New Article: Regulating Freedom of Speech on Social Media: Comparing the EU and the US Approach

My article, Regulating Freedom of Speech on Social Media: Comparing the EU and the US Approach, was recently published by Stanford Law School. It is the second TTLF Working Paper I have published.


Here is the abstract of the article:

Social media platforms provide forums to share ideas, jokes, images, insults, and threats. These private companies form a contract with their users, who agree in turn to respect the platform’s private rules, which evolve regularly and organically, sometimes reacting to a particular event, just as legislatures may do.

As these platforms have a global reach, yet are, for the most part, located in the United States, the interplay between the platforms’ terms of use and the laws of the states where the users are located varies greatly from country to country.

This article proposes to explore the often-tense relationships between the states, the platforms, and the users, whether the users’ speech creates harm or they are victims of such harm.

The first part of the article is a general presentation of freedom of expression law. This part does not attempt to be a comprehensive catalog of such laws around the world; it is only a general presentation of the U.S. and European Union laws protecting freedom of expression, using France as an example of a particular country in the European Union. While the principle is freedom of speech, the legal standard is set by international instruments, such as the United Nations Universal Declaration of Human Rights or the European Convention on Human Rights.

The second part of the article presents what the author believes to be the four main justifications for regulating free speech: protecting the public order, protecting the reputation of others, protecting morality, and advancing knowledge and truth. The protection of public order entails the protection of the flag or the king, and lèse-majesté sometimes survives even in a republic. The safety of the economic market, which may dangerously sway if false information floats online, is another state concern, as is the personal safety of the public. Speech sometimes harms, even kills, or places an individual in fear for his or her life. The reputation and honor of others are easily smeared on social media, whether by defamation, insults, or hate speech, a category of speech not clearly defined by law, yet at the center of the debate on online content moderation, including whether there is a right to speak anonymously online. What is “morality” is another puzzling question, as blasphemy, indecency, and even pornography have different legal definitions around the world and different private definitions by the platforms. Even truth is an elusive concept, and both states and platforms struggle to define what is “fake news,” and whether clearly false information, such as denying the existence of the Shoah, should be allowed to be published online. Indeed, while four justifications for regulating speech are delineated in this article, the speech and conduct that should be considered an attack on values worthy of protection are not viewed uniformly by the different states and the different platforms, and where the barriers to speech are placed provides a telling picture of the state of democracy.

The third part examines who should have the power to delete speech on social media. States may exert censorship on the platforms, or even on the pipes, to block access to speech, and may punish, sometimes harshly, speakers daring to cross the barriers to free speech erected by the states. For the sake of democracy, the integrity of the electoral process must not be threatened by false information, whether about the candidates, about alleged fraud, or even about the result of the vote.

Social media platforms must respect the law. In the United States, Section 230 of the Communications Decency Act of 1996 provides immunity to platforms for third-party content, but also for screening offensive content. Section 230 has been modified several times, and many bills, from both sides of the political spectrum, aim at further reform. In the European Union, the E-commerce Directive similarly provides a safe harbor to social media platforms, but the law is likely to change soon, as the Digital Services Act proposal was published in December 2020. The platforms have their own rules, and may even soon have their own private courts, for example the recently created Facebook Oversight Board. However, other private actors may have a say on what can be published on social media, for instance employers or the governing bodies of regulated professions, such as judges or politicians. Even private users may censor the right of others to speak freely, using copyright laws, or may use public shaming to frighten speakers into silence. Such fear may lead users to self-censor their speech, to the detriment of the marketplace of ideas, or to delete controversial messages. Public figures, however, may not have the right to delete social media posts or to block users.

The article was finished in the last days of 2020, a year which saw attempts to use social media platforms to sway the U.S. elections by spreading false information, the semi-failed attempt of France to pass a law protecting social media users against hate speech, and false news about the deadly Covid-19 virus spreading online like wildfire, through malicious or naïve posts. A few days after the article was completed, the U.S. Capitol was attacked, on January 6, 2021, by a seditious mob seeking to overturn the results of the Presidential election, believing that the election had been rigged, a falsehood amplified by thousands of users on social media, including the then President of the United States. Several social media platforms responded by blocking the President’s accounts, some temporarily, others, such as Twitter, permanently.


All the President’s Tweets… and Section 230 of the CDA

On May 28, 2020, President Donald Trump issued the Executive Order on Preventing Online Censorship (EO), directing the Secretary of Commerce, in consultation with the Attorney General, to request that the Federal Communications Commission “expeditiously propose” regulations to clarify when a provider of an interactive computer service screening offensive content under Section 230(c)(2)(A) of the Communications Decency Act (CDA) would not be able to benefit from the Good Samaritan provision of the CDA.

Twitter called the EO “a reactionary and politicized approach to a landmark law.” An executive order is not a law, and it cannot overturn an act of Congress. However, the EO requires a clarification of the law. Laws are clarified and interpreted by the courts; they are not clarified by government agencies.

The CDA is an essential law of the web

The CDA is an important federal law: without it, the web as we know it would not be able to function, as intermediaries would be constantly held liable for torts, such as defamation, and would have to defend themselves in court.

The law was passed by Congress after a New York court held in Stratton Oakmont, Inc. v. Prodigy Servs. that the operator of a computer bulletin board, where a third party had posted defamatory allegations, was a publisher. Congress explained in 1996 that “[one] of the specific purposes of [section 230] is to overrule Stratton-Oakmont v. Prodigy and any other similar decisions which have treated such providers and users as publishers or speakers of content that is not their own because they have restricted access to objectionable material.”

What led to this EO

The EO was signed following Twitter’s decision to add a civic integrity notice to two tweets posted by the President on May 26, 2020, which alleged mail-in ballot fraud in California (see here and here). The warning is a link reading “! Get the facts about mail-in ballots” and leads to a page offering a counterview. Twitter explained that it had done so “as part of our efforts to enforce our civic integrity policy. We believe those Tweets could confuse voters about what they need to do to receive a ballot and participate in the election process.”

President Trump reacted to this move on Twitter, posting: “….Twitter is completely stifling FREE SPEECH, and I, as President, will not allow it to happen!”

The President is taking the view that his freedom of speech has been abridged by Twitter, a private company. However, the First Amendment of the Constitution does not protect freedom of speech against private actors; it prevents Congress from passing laws abridging freedom of speech. This means private companies may choose to set their own policies regulating speech, as long as those policies do not violate the law (as would be the case, for instance, if a private policy violated the Civil Rights Act).

The EO states that:

Twitter now selectively decides to place a warning label on certain tweets in a manner that clearly reflects political bias.  As has been reported, Twitter seems never to have placed such a label on another politician’s tweet.  As recently as last week, Representative Adam Schiff was continuing to mislead his followers by peddling the long-disproved Russian Collusion Hoax, and Twitter did not flag those tweets.  Unsurprisingly, its officer in charge of so-called ‘Site Integrity’ has flaunted his political bias in his own tweets.

The EO appears to be less an official order than a personal message from the President lashing out at Twitter, at one of its employees in charge of Site Integrity, who has been named in another of the President’s tweets, and even at Representative Adam Schiff, the lead impeachment manager in the Senate trial of the President.

The Good Samaritan provisions of the CDA

Section 230(c)(1) of the CDA created a safe harbor for providers and users of an interactive computer service, which cannot be “treated as the publisher or speaker of any information provided by another information content provider.”

The CDA defines an “interactive computer service” as any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server. These are, for instance, web hosts, search engines, e-commerce sites, and, yes, social media platforms, such as Twitter.

They are immune because they only act as intermediaries of third-party content. As such, they cannot be held liable for content posted through their services. They are intermediaries, not publishers.

The CDA’s immunity for screening offensive content

Section 230(c)(2)(A) shields providers and users of interactive computer services from civil liability for “any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.”

The EO argues:

It is the policy of the United States to ensure that, to the maximum extent permissible under the law, this provision is not distorted to provide liability protection for online platforms that — far from acting in “good faith” to remove objectionable content — instead engage in deceptive or pretextual actions (often contrary to their stated terms of service) to stifle viewpoints with which they disagree.

The argument is that platforms are taking advantage of their power to screen offensive content, even content protected by the First Amendment, to promote their point of view.

What does the EO aim to achieve?

The EO argues that:

In a country that has long cherished the freedom of expression, we cannot allow a limited number of online platforms to hand pick the speech that Americans may access and convey on the internet.  This practice is fundamentally un-American and anti-democratic.  When large, powerful social media companies censor opinions with which they disagree, they exercise a dangerous power.  They cease functioning as passive bulletin boards, and ought to be viewed and treated as content creators.

The argument is that social media platforms have now taken on the role of a publisher and should no longer be protected by the Section 230 safe harbor.

The EO calls for the clarification of the scope of Section 230 immunity, arguing that “the immunity should not extend beyond its text and purpose to provide protection for those who purport to provide users a forum for free and open speech, but in reality use their power over a vital means of communication to engage in deceptive or pretextual actions stifling free and open debate by censoring certain viewpoints.”

The scope of Section 230 immunity is at stake

The EO further argues that:

When an interactive computer service provider removes or restricts access to content and its actions do not meet the criteria of subparagraph (c)(2)(A), it is engaged in editorial conduct.  It is the policy of the United States that such a provider should properly lose the limited liability shield of subparagraph (c)(2)(A) and be exposed to liability like any traditional editor and publisher that is not an online provider.

The “criteria” of Section 230(c)(2)(A) are not clear. The Court of Appeals for the Ninth Circuit recently noted in Enigma Software Group USA v. Malwarebytes, Inc. that the term “otherwise objectionable” is a “catchall” phrase, citing Judge Fisher’s concurring opinion in Zango, Inc. v. Kaspersky Lab, Inc., and reviewed the legislative history of the CDA as a law aiming at protecting minors from online pornography. The Ninth Circuit recognized in Enigma that interpreting the statute to give providers unbridled discretion to block online content would, as Judge Fisher warned, “enable and potentially motivate internet-service providers to act for their own, and not the public, benefit.”

The EO argues that the purpose of Section 230(c) is “narrow,” thus appearing to argue that the CDA’s goal was only to protect users against pornography. However, the Supreme Court held in 1997, in Reno v. ACLU, that two provisions of the CDA, one imposing sanctions for knowingly transmitting obscene or indecent messages, the other for sending patently offensive material to minors, were unconstitutional as abridging freedom of speech.

Yet the CDA played a vital role in the development of the web as we know it, including social media, even after being stripped of the provisions which had, in essence, given it its name, the Communications Decency Act… Therefore, the scope of Section 230(c)(2)(A) is likely broader than pornography (obscenity is not protected by the First Amendment; see Roth v. United States).

Congressional statutory findings for the CDA stated that interactive computer services “offer a forum for a true diversity of political discourse, unique opportunities for cultural development, and myriad avenues for intellectual activity,” and thus appeared to reflect broader goals for passing the law. The CDA is an essential law for the web as we know it to operate. While the EO aims at preventing online censorship, it would likely lead to constant censorship, to the point that the social media business model may be seriously impacted, while impairing the robust marketplace of ideas ideally fostered by the First Amendment.

Do facts still exist?

The tweets which had warranted the Twitter warning read:

There is NO WAY (ZERO!) that Mail-In Ballots will be anything less than substantially fraudulent. Mail boxes will be robbed, ballots will be forged & even illegally printed out & fraudulently signed. The Governor of California is sending Ballots to millions of people, anyone….. living in the state, no matter who they are or how they got there, will get one. That will be followed up with professionals telling all of these people, many of whom have never even thought of voting before, how, and for whom, to vote. This will be a Rigged Election. No way!

These statements have not been substantiated and thus breached Twitter’s civic integrity policy, under which Twitter’s services cannot be used “for the purpose of manipulating or interfering in elections or other civic processes.”

Twitter’s notice page featured several tweets taking the view that the President’s voting fraud claims were unsubstantiated, including a tweet featuring a link to a press release from the office of California Governor Gavin Newsom about his executive order requiring mail-in ballots to be sent to every registered California voter for the November general election.

However, even a statement coming from an official source may soon no longer be trustworthy. On May 29, a tweet from the White House account was flagged by Twitter as breaching the platform’s glorification of violence policy. Are we indeed living in a post-truth world?
