New Research Project Comparing How the EU Digital Services Act and the Facebook Oversight Board Empower Users

My new research project as a Fellow of the Transatlantic Technology Law Forum is Comparing the powers given to users by the EU Digital Services Act and by the Facebook Oversight Board: at the service of users or overseeing them?

Abstract:
The European Commission published on December 15, 2020 its Proposal for a Regulation on a Single Market For Digital Services (Digital Services Act, “DSA”). This new horizontal framework aims to be a lex generalis which will be without prejudice to the e-Commerce Directive, adopted in 2000, and to the Audiovisual Media Services Directive, revised in 2018.

This new Regulation is likely to dramatically change how online platforms, particularly social media platforms, moderate content posted by their users. Very large platforms, defined by the DSA as those with an average of 45 million or more monthly active users in the European Union, will face heightened responsibilities, such as assessing and mitigating the risks of dissemination of illegal content. Whether this heightened scrutiny will lead illegal speech to migrate to smaller platforms remains to be seen.

Each Member State will designate its Digital Services Coordinator, a primary national authority ensuring the consistent application of the DSA. Article 47 of the DSA would establish the European Board for Digital Services, which will independently advise the Digital Services Coordinators on supervising intermediary service providers. A few months earlier, the creation of Facebook’s Oversight Board (“the Oversight Board”) was met with skepticism, but also with hope and great interest. Dubbed by some “Facebook’s Supreme Court,” its nature is intriguing: it appears to be both a corporate board of the U.S. public company Facebook and a private court of law, with powers to regulate the freedom of expression of Facebook and Instagram users around the world. This independent body’s main mission is to issue recommendations on Facebook’s content policies and to decide whether Facebook may, or may not, keep or remove content published on its two platforms, Facebook and Instagram. The Oversight Board’s decisions are binding on Facebook, unless implementing them would violate the law.

The law of the States thus remains the ultimate arbiter of the legality of content. However, this legal corpus is far from homogeneous. Both the European Court of Justice and the European Court of Human Rights have addressed the issue of illegal content. In the European Union, several instruments, such as the revised Audiovisual Media Services Directive and the upcoming Regulation on preventing the dissemination of terrorist content online, address harmful, illegal content. Member States each have their own legal definitions. France is currently anticipating the DSA by amending its June 21, 2004 Law for Confidence in the Digital Economy. If the French bill becomes law, platforms will have to make public the resources devoted to the fight against illegal content, and will have to implement procedures, as well as human and technological resources, to inform judicial or administrative authorities, as soon as possible, of the actions taken following an injunction by these courts or authorities.

The DSA does provide a definition of illegal content, but it mainly aims at harmonizing the due diligence obligations of the platforms and how they address the issue. For instance, article 12 of the DSA is likely to influence how platforms’ terms and conditions are written, as it directs platforms to inform users publicly, clearly, and unambiguously about their content moderation policies and procedures, including the roles played by algorithms and by human review.

This requirement is particularly welcome, as the use of algorithms to control speech and amplify messages currently suffers from a lack of transparency and has credibly been accused of fostering bias: is algorithmic bias illegal content? The Oversight Board will only examine a few posts among the billions posted each year, and the cases submitted for its review may be the proverbial trees hiding the forest: the use of algorithms to control speech by amplifying messages likely to trigger reactions, clicks, and likes, often monetizing hate and preventing minority speech from being heard. As freedom of expression is not only the right to impart but also to receive information, are algorithms, which strive to keep us in well-delimited echo chambers, the ultimate violation of freedom of expression?

Users are given a way to have a say about what should be considered illegal online speech, as both the DSA and the Oversight Board aim at providing users more power. The DSA would give users a user-friendly complaint and redress mechanism and the right to receive an explanation if content they have posted is taken down by a platform. The Oversight Board uses a crowdsourcing scheme allowing users all around the world to offer their opinion on a particular case before it is decided.

International users will also be given the power to define and shape illegal content. Is this a new form of private due process? The DSA would provide several new rights to users: the right to send a notice (article 14 on “Notice and action mechanisms”); the right to complain (article 17 on “Internal complaint-handling system”); the right to out-of-court redress (article 18 on “Out-of-court dispute settlement”); a special right to flag, if considered particularly trustworthy (article 19 on “Trusted Flaggers,” whose notices will be given more weight than those of common users); and the right to be provided an explanation (article 15 of the DSA on “Statement of reasons”).

The Oversight Board also gives powers to users: they have the power to appeal a decision made by Facebook about content posted on its two platforms, and they are given the opportunity to share their opinions, knowledge, or expertise in each case. Users are invited to explain local idioms and standards, shedding further light on what content should be considered illegal. As such, they are given an easy way to file an “amicus brief” comment.

Surprisingly, both the DSA and the Oversight Board appear to have the fundamental rights of users as the ultimate standard of their missions. As stated in the DSA Explanatory Memorandum, the DSA “seeks to improve users’ safety online across the entire Union and improve the protection of their fundamental rights.” More surprisingly still, the Oversight Board has shed new light on international human rights. The Oversight Board’s first decisions, issued on January 28, 2021, were based not on U.S. law or on the private rules of the Facebook platform, but on international human rights law. As such, it can be argued that these decisions were “based on European values – including the respect of human rights, freedom, democracy, equality and the rule of law,” the same values which are the foundation of the DSA. This research paper will analyze the decisions of the Oversight Board during its first year, including how the cases were chosen, where the speakers of the speech at stake live, and the legal issues raised by the cases.

However, assessing the legality of content in accordance with the law is still the prerogative of the courts. The DSA gives them the right to be informed by the platforms of the effect given to their orders to act against a specific item of illegal content (article 8). This power is, however, also given by the DSA to administrative authorities. Will the administrative judge soon have more power than the judiciary to protect freedom of expression? The DSA may lead human rights to become a mere part of corporate compliance, overseen by administrative authorities, as is, arguably, already the case with the right to privacy under the GDPR. Would tort law and civil liability be more adequate to protect the rights of users?

Regardless of their differences, the DSA and the work of the Oversight Board may have a common goal: transparency. The DSA aims at setting a higher standard of transparency and accountability, as its article 13 on “Transparency reporting obligations for providers of intermediary services” directs platforms to publish, at least once a year, a clear and comprehensible report on their content moderation practices. Very large online platforms will have to undergo external risk audits and be publicly accountable. The Oversight Board is committed to publicly sharing written statements about its decisions and their rationale.

This research paper will compare the place and power given to victims of illegal speech by the Oversight Board’s caselaw and recommendations with the place and power given to them by the platforms in their DSA compliance practice. It will also examine how the Oversight Board’s decisions have been implemented by Facebook. Are the future European Union Digital Services Act and the corporate Oversight Board at the service of users, or are they overseeing them?


New Article: Regulating Freedom of Speech on Social Media: Comparing the EU and the US Approach

My article, Regulating Freedom of Speech on Social Media: Comparing the EU and the US Approach, was recently published by Stanford Law School. It is the second TTLF Working Paper I have published.


Here is the abstract of the article:

Social media platforms provide forums to share ideas, jokes, images, insults, and threats. These private companies form a contract with their users, who agree in turn to respect the platform’s private rules, which evolve regularly and organically, sometimes reacting to a particular event, just as legislatures may do.

As these platforms have a global reach, yet are, for the most part, located in the United States, the interplay between the platforms’ terms of use and the laws of the states where users are located varies greatly from country to country.

This article proposes to explore the often-tense relationships between the states, the platforms, and the users, whether their speech causes harm or they are the victims of such harm.

The first part of the article is a general presentation of freedom of expression law. This part does not attempt to be a comprehensive catalog of such laws around the world; it is only a general presentation of the U.S. and European Union laws protecting freedom of expression, using France as an example of a particular country within the European Union. While the principle is freedom of speech, the legal standard is set by international conventions, such as the United Nations Universal Declaration of Human Rights or the European Convention on Human Rights.

The second part of the article presents what the author believes to be the four main justifications for regulating free speech: protecting the public order, protecting the reputation of others, protecting morality, and advancing knowledge and truth. The protection of public order entails the protection of the flag or the king, and lèse-majesté sometimes survives even in a Republic. The safety of the economic market, which may dangerously sway if false information floats online, is another state concern, as is the personal safety of the public. Speech sometimes harms, even kills, or places an individual in fear for his or her life. The reputation and honor of others are easily smeared on social media, whether by defamation, insults, or hate speech, a category of speech not clearly defined by law, yet one at the center of the debate on online content moderation, including the question of whether there is a right to speak anonymously online. What is “morality” is another puzzling question, as blasphemy, indecency, and even pornography have different legal definitions around the world and different private definitions by the platforms. Even truth is an elusive concept, and both states and platforms struggle to define what is “fake news” and whether clearly false information, such as denying the existence of the Shoah, should be allowed to be published online. Indeed, while four justifications for regulating speech are delineated in this article, the speech and conduct which should be considered an attack on values worthy of protection are not viewed in the same way by different states and different platforms, and where the barriers to speech are placed provides a telling picture of the state of democracy.

The third part examines who should have the power to delete speech on social media. States may exert censorship on the platforms, or even on the pipes, to block access to speech, and may punish, sometimes harshly, speakers daring to trespass the barriers to free speech erected by the states. For the sake of democracy, the integrity of the electoral process must not be threatened by false information, whether it is false information about the candidates, about alleged fraud, or even about the result of the vote.

Social media platforms must respect the law. In the United States, Section 230 of the Communications Decency Act of 1996 provides immunity to platforms for third-party content, but also for screening offensive content. Section 230 has been modified several times, and many bills, from both sides of the political spectrum, aim at further reform. In the European Union, the e-Commerce Directive similarly provides a safe harbor to social media platforms, but the law is likely to change soon, as the Digital Services Act proposal was published in December 2020. The platforms have their own rules, and may even soon have their own private courts, for example the recently created Facebook Oversight Board. However, other private actors may have a say on what can be published on social media, for instance employers or the governing bodies of regulated professions, such as judges or politicians. Even private users may censor the right of others to speak freely, using copyright law, or may use public shaming to frighten speakers into silence. Such fear may lead users to self-censor their speech, to the detriment of the marketplace of ideas, or to delete controversial messages. Public figures, however, may not have the right to delete social media posts or to block users.

The article was finished in the last days of 2020, a year which saw attempts to use social media platforms to sway the U.S. elections by spreading false information, the semi-failed attempt of France to pass a law protecting social media users against hate speech, and false news about the deadly Covid-19 virus spreading online like wildfire, through malicious or naïve posts. A few days after the article was completed, on January 6, 2021, the U.S. Capitol was attacked by a seditious mob seeking to overturn the results of the presidential election, believing that the election had been rigged, a false claim amplified by thousands of users on social media, including the then President of the United States. Several social media platforms responded by blocking the President’s accounts, temporarily or, as Twitter did, permanently.



Delfi v. Estonia: Ice Roads and Chilling Effect at the European Court of Human Rights

The Grand Chamber of the European Court of Human Rights (ECHR) held on June 16, 2015 that the freedom of expression of an Estonian news portal had not been infringed when it was found liable for defamatory comments posted by third parties on its site. The case is Delfi AS v. Estonia, and it has sent ripples online, as many fear it signals that the Strasbourg Court may favor the protection of reputation over online freedom of speech.

Delfi is one of the largest Internet news portals in Estonia and publishes hundreds of news articles every day. At the time of the facts of the case, visitors could leave comments which were automatically uploaded to the site. Delfi did not edit or moderate them, but it had a notice-and-take-down system which allowed visitors to mark comments as insulting or hatred-inciting. The system also automatically deleted comments containing obscene words.

In January 2006, Delfi published an article about SLK, an Estonian ferry company, claiming that SLK had destroyed public ice roads, which are used for free in the winter to cross the frozen sea between the Estonian mainland and some islands instead of having to take a ferry. While the article itself was balanced, it triggered 185 comments in two days, 20 of which were offensively derogatory or even contained threats against L., SLK’s sole shareholder. L.’s attorneys requested that Delfi remove these comments, which Delfi did in March 2006, but Delfi refused to pay the approximately 32,000 euros in damages requested by L.

L. filed a civil suit against Delfi, which was dismissed, as the County Court found the news portal to be sheltered from liability by the Estonian Information Society Services Act, based on Directive 2000/31/EC (the e-Commerce Directive). This directive exempts from liability, whether for copyright infringement or defamation, information society service providers acting as a mere conduit.

L. appealed, won, and the case was remanded to the County Court, which this time ruled in favor of L., finding the Information Society Services Act inapplicable and holding Delfi to be the publisher of the comments. The Estonian Supreme Court dismissed Delfi’s appeal, reasoning that Delfi was a content provider, not an information society service provider, because it had “integrated the comments environment into its news portal, inviting visitors to the website to complement the news with their own judgments … and opinions.”

Delfi then set up a team of moderators in charge of reviewing comments and deleting inappropriate ones, and it applied to the European Court of Human Rights, claiming that holding it liable for comments posted by third parties on its website violated its freedom of expression as protected by article 10 of the Convention for the Protection of Human Rights and Fundamental Freedoms (the Convention). The Court found unanimously in October 2013 that article 10 had not been violated [I wrote about the judgment here]. The case was then referred to the Grand Chamber.

Liability Exemption Provided by the e-Commerce Directive

Recital 43 of the e-Commerce Directive states that “[a] service provider can benefit from the exemptions for ‘mere conduit’ … when he is in no way involved with the information transmitted.” Article 12.1 of the Directive explains that a provider is a “mere conduit” if it did not initiate the transmission, did not select its receiver, and did “not select or modify the information contained in the transmission.”

In addition, Article 14 of the e-Commerce Directive gives providers a safe harbor from liability for information stored at the request of a recipient of the service if they did not have actual knowledge of the illegal activity and if they “expeditiously” removed or disabled access to the information. Providers, however, do not have a general obligation to monitor the information they transmit or store, nor do they have “a general obligation actively to seek facts or circumstances indicating illegal activity” (e-Commerce Directive, Article 15).

The Supreme Court had found Delfi to be a publisher because it had “integrated the comments environment into its news portal, inviting visitors to the website to complement the news with their own judgments and opinions… [and it] has an economic interest in the posting of comments” (§ 13). Also, Delfi “can determine which of the comments added will be published and which will not be published.” Because Delfi “governs the information stored in the comment environment, [it] provides a content service.” The Supreme Court added that “[p]ublishing of news and comments on an Internet portal is also a journalistic activity” (§ 31).

User-Generated Content, Unprotected Speech and Democracy

Delfi argued that user-generated content (UGC) is of high importance as it often raises “serious debates in society and even informed journalists of issues that were not publicly known” (§ 66). The Court acknowledged this, but pointed out that the Internet allows defamatory and hate speech to be disseminated “in a matter of seconds and sometimes remain persistently available online,” adding that “[t]hese two conflicting realities lie at the heart of this case” (§ 110).

The Grand Chamber seemed to attach great importance to the fact that some comments were defamatory, as it further stated that “the risk of harm posed by content and communications on the Internet to the exercise and enjoyment of human rights and freedoms, particularly the right to respect for private life, is certainly higher than that posed by the press” (§ 133) and “reiterat[ed] that the right to protection of reputation is a right which is protected by Article 8 of the Convention as part of the right to private life” (§ 137). The Grand Chamber noted that most of the comments published about L. “amounted to hate speech or incitements to violence,” even though only 20 of the 185 comments were offensive (§ 17).

These 20 comments are not protected by Article 10, and thus “the freedom of expression of the authors of the comments is not at issue in the present case” (§ 140). However, one can argue that their freedom of expression was indeed violated, as their comments were deleted by Delfi and the Estonian courts found their deletion to be necessary. In his concurring opinion, Judge Zupančič even wrote that “it is completely unacceptable that an Internet portal or any kind of mass media should be permitted to publish any kind of anonymous comments” (Zupančič concurring opinion, p. 45), a rather surprising statement from a human rights judge, as the right to speak anonymously is a beacon of democracy.

No Violation of Article 10

Article 10 §1 provides for a general right to freedom of expression, which can, however, be restricted under Article 10 §2, as prescribed by law and if “necessary in a democratic society,” for various reasons, amongst them the protection of the reputation of a third party. Delfi argued that requiring it to monitor third-party content was an interference with its freedom of expression which was not “prescribed by law” and was not necessary in a democratic society. It argued further that the Supreme Court judgment had a “chilling effect” on freedom of expression (§ 73).

The Grand Chamber found this interference was indeed prescribed by law, as it was “foreseeable that a media publisher running an Internet news portal for an economic purpose could, in principle, be held liable under domestic law for the uploading of clearly unlawful comments” (§ 128).

The Grand Chamber also found this interference to be “necessary in a democratic society,” which implies, under well-settled ECHR case law, that there is a “pressing social need,” the existence of which can be assessed by Member States with “a certain margin of appreciation.” The ECHR only reviews whether the interference is “proportionate to the legitimate aim pursued” and whether the reasons set forth by the Member State for such interference are indeed “relevant and sufficient” (§ 131).

In order to assess whether the Estonian courts had breached Article 10 of the Convention, the Grand Chamber examined the context of the comments, the liability of the authors of the comments as an alternative to Delfi’s liability, the measures taken by Delfi to prevent or remove these comments, and the consequences of the domestic proceedings for Delfi.

The Grand Chamber first examined the context of the comments, acknowledging that the article published by Delfi was balanced, but noting that such an article may nevertheless “provoke fierce discussions on the Internet” (§ 144). The authors of the comments could not modify or delete them after having posted them; only Delfi “had the technical means to do this,” and thus its involvement “went beyond that of a passive, purely technical service provider” (§ 146).

The Grand Chamber then examined whether the liability of the authors of the comments could serve “as a sensible alternative to the liability of the Internet news portal.” The Court was “mindful of the interest of Internet users in not disclosing their identity.” However, this right “must be balanced against other rights and interests.” The Grand Chamber cited the famous Court of Justice of the European Union (CJEU) Google Spain and Google “right to be forgotten” case, noting that the CJEU “found that the individual’s fundamental rights, as a rule, overrode the economic interests of the search engine operator and the interests of other Internet users” (§ 147). The Grand Chamber also cited K.U. v. Finland, where the ECHR held that it was not sufficient that the victim of an online crime could obtain damages from the service provider; the victim must also be able to obtain reparation from the author of the crime (§ 149), and Krone Verlag GmbH & Co. KG v. Austria, where the ECHR found that shifting the risk of the defamed person obtaining redress to the media company was not a disproportionate interference with the media company’s freedom of expression (§ 151).

As for the measures taken by Delfi, the Grand Chamber first noted that it was not clear whether the Supreme Court had held that Delfi had to prevent the posting of unlawful comments or whether removing them quickly would have been enough for Delfi not to be found liable. The Grand Chamber decided that the latter interpretation was the correct one, and that this legal requirement was not a disproportionate interference with Delfi’s freedom of expression (§ 153). In a joint concurring opinion, Judges Raimondi, Karakaş, De Gaetano and Kjølbro noted that interpreting the Supreme Court as having held that Delfi had to entirely prevent the posting of unlawful comments “would in practice imply that the portal would have to pre-monitor each and every user-generated comment in order to avoid liability for any unlawful comments. This could in practice lead to a disproportionate interference with the news portal’s freedom of expression” (Raimondi concurring opinion, § 7). However, Delfi was found to be required to “post-monitor” each of the comments posted on its site, and this can also lead to a disproportionate interference with its freedom of expression!

The systems put in place by Delfi, the notice-and-take-down mechanism and the automatic deletion of some vulgar words, indicate that it had not completely neglected its duty to avoid causing harm to third parties. They failed, however, to prevent “manifest expressions of hatred and blatant threats to the physical integrity of L.” (§ 153). The Grand Chamber concluded that Member States could “impose liability on Internet news portals, without contravening Article 10 of the Convention, if they fail to take measures to remove clearly unlawful comments without delay” (§ 159).

Finally, the Grand Chamber found that the fine imposed on Delfi was not disproportionate to the breach, considering that Delfi is one of the largest news portals in Estonia, and noted that Delfi did not have to change its business model after having been sued by L.

For all these reasons, the Grand Chamber found that imposing liability on Delfi for comments posted on its portal was not a disproportionate restriction of its freedom of expression (§ 162).

The Fuzzy Scope of this Case

The Court established the scope of this case to be about the “duties and responsibilities of Internet news portals, under Article 10 §2… when they provide for economic purposes a platform for user-generated comments on previously published content and some users … engage in clearly unlawful speech” (§ 115), but it stated that the case was not about other Internet forums, such as a discussion forum or a bulletin board “where users can freely set out their ideas on any topics without the discussion being channelled by any input from the forum’s manager; or a social media platform where the platform provider does not offer any content and where the content provider may be a private person running the website as a blog or a hobby” (§ 115).

This is an important distinction. However, the Court was not clear enough in its explanation of which online forums are considered publishers under Delfi. It seems that the Court attached great importance to the fact that Delfi benefited financially from the number of comments posted on its site, including comments which may be defamatory. But a social media platform may also benefit from the number of its users, whether or not they are actively engaged. For example, a person can visit Facebook or Twitter to read posts without posting anything herself, but she is still considered an active user because she logged in, and this contributes to the financial value of the site. The same goes for blogs, which count the number of visits and clicks, not necessarily the number of comments, to gauge success. Of course, a social media site whose users are not actively posting and creating content is doomed to fail.

In their dissenting opinion, Judges Sajó and Tsotsoria wrote that requiring “active Internet intermediaries,” which they defined as hosts providing their own content and opening their intermediary services for third parties to comment on that content, to have constructive knowledge of the content posted “is an invitation to self-censorship at its worst.” They also noted that “[g]overnments may not always be directly censoring expression, but by putting pressure and imposing liability on those who control the technological infrastructure (ISPs, etc.), they create an environment in which collateral or private-party censorship is the inevitable result.” For Judges Sajó and Tsotsoria, the “Internet is more than a uniquely dangerous novelty. It is a sphere of robust public discourse with novel opportunities for enhanced democracy” (Sajó and Tsotsoria dissenting opinion, p. 46).

Judges Sajó and Tsotsoria further noted that active intermediaries are now obliged to remove unlawful content “without delay” after publication, and they found this to be a form of prior restraint. However, prior restraint is censorship before publication. Here, the Grand Chamber favored a system where comments can be freely published but must be deleted by the site if found to be defamatory or hateful. As such, the Grand Chamber did not promote a notice-and-take-down system, in which it is the victim who puts the site on notice, but rather gave intermediaries a general responsibility to expeditiously censor any illegal speech published on their sites. Such a responsibility may give rise to general corporate censorship of the Web and significantly chill online speech. The Strasbourg Court took, indeed, an icy and slippery road.

(Image courtesy of Flickr user ThreeIfByBike pursuant to a Creative Commons CC BY-SA 2.0 license.)
