New Article: Regulating Freedom of Speech on Social Media: Comparing the EU and the US Approach

My article, Regulating Freedom of Speech on Social Media: Comparing the EU and the US Approach, was recently published by Stanford Law School. It is the second TTLF Working Paper I have published.

Here is the abstract of the article:

Social media platforms provide forums to share ideas, jokes, images, insults, and threats. These private companies form a contract with their users, who agree in turn to respect the platform’s private rules, which evolve regularly and organically, sometimes reacting to a particular event, just as legislatures may do.

As these platforms have a global reach, yet are, for the most part, located in the United States, the interplay between the platforms’ terms of use and the laws of the states where the users are located varies greatly from country to country.

This article proposes to explore the often-tense relationships between the states, the platforms, and the users, whether the users’ speech creates harm or they are victims of such harm.

The first part of the article is a general presentation of freedom of expression law. This part does not attempt to be a comprehensive catalog of such laws around the world; it is only a general presentation of the U.S. and European Union laws protecting freedom of expression, using France as an example of a particular country in the European Union. While the principle is freedom of speech, the legal standard is set by international instruments, such as the United Nations Universal Declaration of Human Rights or the European Convention on Human Rights.

The second part of the article presents what the author believes to be the four main justifications for regulating free speech: protecting the public order, protecting the reputation of others, protecting morality, and advancing knowledge and truth. The protection of public order entails the protection of the flag or the king, and lèse-majesté sometimes survives even in a republic. The safety of the economic market, which may dangerously sway if false information floats online, is another state concern, as is the personal safety of the public. Speech sometimes harms, even kills, or places an individual in fear for her or his life. The reputation and honor of others are easily smeared on social media, whether by defamation, insults, or hate speech, a category of speech not clearly defined by law yet at the center of the debate on online content moderation, including whether there is a right to speak anonymously online. What counts as “morality” is another puzzling question, as blasphemy, indecency, and even pornography have different legal definitions around the world and private definitions by the platforms. Even truth is an elusive concept, and both states and platforms struggle to define what is “fake news” and whether clearly false information, such as denying the existence of the Shoah, should be allowed to be published online. Indeed, while four justifications for regulating speech are delineated in this article, the speech and conduct which should be considered an attack on values worthy of protection is not viewed uniformly by the different states and the different platforms, and how the barriers to speech are placed provides a telling picture of the state of democracy.

The third part examines who should have the power to delete speech on social media. States may exert censorship on the platforms, or even on the pipes, to block access to speech, and may punish, sometimes harshly, speakers daring to trespass the barriers to free speech erected by the states. For the sake of democracy, the integrity of the electoral process must not be threatened by false information, whether about the candidates, about alleged fraud, or even about the results of the vote.

Social media platforms must respect the law. In the United States, Section 230 of the Communications Decency Act of 1996 provides immunity to platforms for third-party content, but also for screening offensive content. Section 230 has been modified several times, and many bills, from both sides of the political spectrum, aim at further reform. In the European Union, the E-commerce Directive similarly provides a safe harbor to social media platforms, but the law is likely to change soon, as the Digital Services Act proposal was published in December 2020. The platforms have their own rules, and may even soon have their own private courts, for example the recently created Facebook Oversight Board. However, other private actors may have a say on what can be published on social media, for instance employers or the governing bodies of regulated professions, such as judges or politicians. Even private users may censor the speech of others, using copyright laws, or may use public shaming to frighten speakers into silence. Such fear may lead users to self-censor their speech, to the detriment of the marketplace of ideas, or to delete controversial messages. Public figures, however, may not have the right to delete social media posts or to block users.

The article was finished in the last days of 2020, a year which saw attempts to use social media platforms to sway the U.S. elections by spreading false information, the semi-failed attempt of France to pass a law protecting social media users against hate speech, and false news about the deadly Covid-19 virus spreading online like wildfire, through malicious or naïve posts. A few days after the article was completed, on January 6, 2021, the U.S. Capitol was attacked by a seditious mob seeking to overturn the results of the Presidential election, believing that the election had been rigged, a falsehood amplified by thousands of users on social media, including the then President of the United States. Several social media platforms responded by blocking the President’s accounts, either temporarily or, as Twitter did, permanently.

A few comments about the SAFE TECH Act, which aims at reforming Section 230

U.S. Senators Mark R. Warner (D-VA), Mazie Hirono (D-HI) and Amy Klobuchar (D-MN) introduced on February 5, 2021, the Safeguarding Against Fraud, Exploitation, Threats, Extremism and Consumer Harms (SAFE TECH) Act, which aims at reforming Section 230 of the Communications Decency Act (CDA) to, according to the press release, “allow social media companies to be held accountable for enabling cyber-stalking, targeted harassment, and discrimination on their platforms.”

Section 230 immunity would extend to speech, including commercial speech

The bill modifies the “26 words that created the internet” in Section 230(c)(1) of the CDA, which would become Section 230(c)(1)(A), striking “information” and replacing it with “speech.” The reason for that change is not clear, as information is necessarily speech, and “speech” is a broader concept than “information,” since it may encompass expressive conduct such as wearing an armband or burning a draft card in public. What matters in Section 230, at least in its current version, is that a “provider or user of an interactive computer service” cannot be held accountable for third-party content. This is why Twitter cannot be sued for defamatory speech. The move is probably an attempt to take “conduct” out of Section 230 immunity, but it can be argued that, by doing so, the bill extends the scope of the immunity rather than reducing it.

In a case often mentioned by speakers in favor of Section 230 reform, Daniel v. Armslist, the Supreme Court of Wisconsin dismissed in 2019 a complaint against a website through which a gun had been sold that was later used to perpetrate a mass shooting, killing four persons, including the plaintiff’s mother. The Wisconsin Supreme Court reversed the decision of the court of appeals and affirmed the circuit court’s dismissal of Plaintiff’s complaint as barred by Section 230(c)(1), because Plaintiff’s claim for relief required the arms-selling platform to be treated as the publisher or speaker of information posted by third parties on its site. The Wisconsin Supreme Court considered Armslist to be an interactive computer service provider under Section 230, but also described it as “a firearm advertising website.” Advertising is commercial speech, and it thus seems that the change made by the bill would not affect similar cases. Professors Danielle Keats Citron and Mary Anne Franks wrote in 2020 that Section 230 defenders “presum[e] that the internet is primarily, if not exclusively, a medium of speech,” and that this “presumption… should be interrogated” because the Internet is now used for all sorts of activities, from making dinner reservations to looking for a soul mate, not merely as a “speech-machine.” So why did the drafters change “information” to “speech”? Time (and upcoming hearings) may tell…

The bill also creates a Section 230(c)(1)(B), which would give defendants the burden of proving, by a preponderance of the evidence, that they are not the publisher or speaker of third-party speech. As such, the immunity would be an affirmative defense. A social media platform would have to prove that it is more likely than not that it is not the publisher or speaker of a particular third-party “speech,” not merely “information.”

This reminds me of another bill aimed at modifying Section 230, the EARN IT Act, named so because Section 230 immunity must be “earned.” Senator Blumenthal stated in his opening statement at the March 11, 2020 hearing of the Senate Committee on the Judiciary about the bill that “immunity is not a right, it should be earned or deserved and certainly should not be continuing if these companies fail to observe and follow the basic moral obligations that they have under the law, but it is a matter of simple morality.” At least, the SAFE TECH Act would require that platforms prove they are not a publisher or speaker, not their morality… However, this change means that litigation about speech published on a platform is likely to become more costly.

The bill would make it more difficult for small companies to enter the market

Senator Warner posted a FAQ page about the bill which asserts that concerns that the bill may “expos[e] small tech companies and startups to liability and increased litigation costs,” and thus “drive them out of business and simply entrench the dominant players,” are “gravely exaggerated.” Why? Because plaintiffs are less likely to sue knowing that small companies are less likely to be able to pay the damages. This argument is weak and does not consider that small companies and startups will view this new risk as a liability and may thus decide not to enter the market or create a new service. Mark Weinstein, founder and CEO of the social media platform MeWe, wrote an opinion piece in the Wall Street Journal last month arguing that small platforms need the immunity of Section 230 to compete with the giant platforms.

Section 230 immunity would exclude commercial speech

At first glance, replacing “information” with “speech” in Section 230(c)(1)(A) would appear to extend the immunity of providers and users of interactive computer services, as “speech” is expression, not merely information, and encompasses commercial speech. While the First Amendment differentiates commercial speech from other types of speech, Section 230, as of now, does not differentiate between commercial and non-commercial speech.

However, should the bill become law, Section 230(c)(1) immunity would no longer be afforded to the provider or user of an interactive computer service if they have “accepted payment to make the speech available or, in whole or in part, created or funded the creation of the speech.” Twitter already offers a Promoted Tweets service, which allows advertisers to promote a particular tweet for a fee, allowing it “to reach a wider group of users or spark engagement from their existing followers.” Twitter is also considering offering subscription-based accounts which would offer premium services.

Social media platforms are not the only “providers or users of an interactive computer service” able to benefit from Section 230; commercial platforms do too, with, however, some exceptions. For instance, the Court of Appeals for the Third Circuit recently held that Amazon is a “seller” under Pennsylvania law and thus that the plaintiff’s negligence and strict liability claims against the platform were not barred by Section 230. However, the plaintiff’s claim that Amazon failed to add a warning about the product, a dog leash which had severely injured the plaintiff, was barred by Section 230.

According to Senator Warner’s press release, such a move is warranted so that Section 230 immunity no longer applies to ads or other paid content, thus “ensuring that platforms cannot continue to profit as their services are used to target vulnerable consumers with ads enabling frauds and scams.” This is a rather heavy-handed way to address the issue of scams, as it has the potential to wipe out trusted business models such as Etsy, a platform offering users a way to sell their products online, whether homemade or “vintage,” which may be less keen to host sales of products it cannot thoroughly vet.

The bill was probably inspired by cases such as Herrick v. Grindr, where the Second Circuit Court of Appeals affirmed the Southern District of New York’s grant of Grindr’s motion to dismiss.

The Second Circuit considered that “Herrick’s claims treat[ed] Grindr as the publisher or speaker of the offensive content,” noting that “[a]t its core, § 230 bars lawsuits seeking to hold a service provider liable for its exercise of a publisher’s traditional editorial functions–such as deciding whether to publish, withdraw, postpone or alter content.” Content can be speech or information; it does not matter.

The bill’s multiple carve-outs

Section 230, as updated by the SAFE TECH Act, would not bar plaintiffs from seeking injunctive relief if they claim that an interactive computer service provider has failed to remove, restrict access to, or prevent the dissemination of “material that is likely to cause irreparable harm.” Indeed, injunctive relief is a remedy used to prevent damage which would be irreparable. Courts consider several factors when deciding whether to grant such injunctions: the irreparable harm which may occur if the injunction is not granted, the likelihood of success on the merits, whether there are sufficiently serious questions going to the merits to make them fair grounds for litigation, and whether the balance of hardships tips decidedly in the movant’s favor.

Herrick v. Grindr, which is alluded to in the press release, may again have inspired the bill. In this case, Plaintiff’s former boyfriend had allegedly created a false Grindr profile for Plaintiff, which led to Plaintiff being harassed by hundreds of men believing he had contacted them online to have sexual relations. United States District Judge Caproni of the Southern District of New York denied extending a Temporary Restraining Order, a form of injunctive relief, because “[a]t this early stage, it appears likely to the Court that Section 230 bars many (if not all) of Plaintiff’s tort claim.” Plaintiff had sought an order requiring Grindr to “immediately disable all impersonating profiles created under Plaintiff’s name or with identifying information relating to Plaintiff, Plaintiff’s photograph, address, phone number, email account or place of work.”

No effect on civil rights laws

The bill would have no effect on civil rights laws, antitrust laws, stalking, harassment, or intimidation laws, international human rights law, or wrongful death actions. The FAQ page explains that the SAFE TECH bill aims at finding a solution to protect victims of online abuse, particularly if they belong to minorities and other marginalized groups. The FAQ page also notes that “online harms spread to the real world,” in Charlottesville, Kenosha and the U.S. Capitol, and that the bill would give victims “an opportunity to hold platforms accountable when their deliberate inaction or product design decisions produce real-world harm.”

The press release cites Professor Olivier Sylvain on how Section 230 provides companies a “free pass for enabling discriminatory conduct.” Professor Sylvain argued in an article published in 2020 that “online intermediaries were avowedly laissez faire about user-generated content” from around 1995 to 2010, but that now “only a handful of firms control the ways in which the vast majority of information flows to users around the world.” The SAFE TECH Act appears to have only these giant companies in mind. For Professor Sylvain, the language of Section 230(c)(1) “was meant to head off judicial application of defamation law to online platforms.” However, the “blanket immunity” Section 230 provided has given them “license to disregard expressive conduct that perpetuates or deepens harms for which policymakers have drafted legal protections.” This view influenced the drafters of the bill, and the press release explains that the law, as modified, will not “impair enforcement of civil rights laws [thus] maintaining the vital and hard-fought protections from discrimination even when activities or services are mediated by internet platforms.”

The bill also carves out an exception for suits under the Alien Tort Claims Act. The goal, as explained in the press release, is to allow “victims of platform-enabled human rights violations abroad (like the survivors of the Rohingya genocide) to seek redress in U.S. courts against U.S.-based platforms.” As such, foreign plaintiffs would still be able to sue social media platforms, claiming, for instance, a violation of international law. Facebook recognized in 2018 that its platform had been used by the Myanmar military to coordinate “inauthentic behavior,” referring to the Rohingya genocide. Facebook stated that its decision to remove these pages “was based on the behavior of these actors rather than on the type of content they were posting,” thus acknowledging that online speech may lead to real-world crime. As explained by Paul Mozur from The New York Times, “Myanmar military personnel… turned the social network into a tool for ethnic cleansing.”

Image (public domain) taken by Flickr user citytransportinfo 

Image (public domain) taken by Flickr user John K Thorne

New York State May Soon Protect Child Influencers

A bill recently introduced in the New York Assembly aims at broadening the scope of the laws governing child performers to include “children who participate in online videos that generate earnings.”

This would allow children performing in “influencer videos” to be considered child performers and thus benefit from the protection of New York’s child performer law, particularly N.Y. Lab. Law §§ 150–154A on the employment and education of child performers.

The justification of the bill argues that:

“The internet has created a world where anyone with a smartphone can become a producer of content and parents can easily upload videos of their children and instantly create an internet star.

Unfortunately, these young online actors lack the protections granted to children working in film and television. Increasingly more parents and children are becoming content producers each day and no regulations are in place to protect children against exploitation.”

Expansion of what is “artistic or creative services” under the law

The bill proposes to modify N.Y. Lab. Law § 150(1) – Definitions to include “influencer” in the list of what is “artistic or creative services” under the law. Such services would include:

“participation in a video that is posted to a video-sharing social networking internet website which generates earnings from sponsors or by other means, based on the number of views of such video, based on the number of clicks on a link leading to such video.”

A child influencer would be a child performer

The bill would change the definition of a “child performer” under N.Y. Lab. Law § 150 to include a child who:

“agrees to render artistic or creative services [as defined by section 150 of the labor code law] where such artistic or creative service were recorded in the state of New York or uploaded to a sharing and/or social networking internet website from within the state of New York.”

The bill would change the definition of a “child performer’s employer” under N.Y. Lab. Law § 150 to include persons employing a “child performer” to furnish “artistic or creative services,” and “video-sharing and/or social networking internet website[s] that generat[e] earnings from videos qualifying as artistic or creative services by a child performer.”

New York Child Performer Law

“Child influencers” would thus be considered child performers under New York law and would benefit from the protections the law affords child performers.

New York’s child performer law protects the interests of child performers. Under Article 7 of the New York State Estates, Powers, and Trusts Law (N.Y. Est. Powers and Trusts Law § 7-7.1 – Child performer trust account), the custodian and guardian of a child performer must establish a child performer account within fifteen days of the commencement of employment, if such an account has not been previously established.

Such accounts are sometimes referred to as “Coogan Accounts,” after Jackie Coogan of The Kid fame, whose fate, losing the millions of dollars he earned as a child performer through parental mismanagement, led to the passing of the California Child Actor’s Bill, the “Coogan Act,” which inspired the New York law.

Under New York law, the custodian of the trust account must then “promptly” notify the child performer’s employer of the existence of the account. The employer must transfer fifteen percent of gross earnings to this account “within thirty days following the final day of employment.” The parent, legal guardian or custodian can require that more than fifteen percent of the gross earnings be transferred to the trust account. If the balance of the account reaches two hundred fifty thousand dollars or more, a trust company must be appointed as custodian of the account.
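As a back-of-the-envelope illustration of how these two thresholds interact, here is a minimal sketch in Python; all earnings and balance figures are hypothetical, and the statute of course speaks in percentages and dollar amounts, not code:

```python
# Illustrative sketch of the New York child performer trust account rules
# described above; all figures are hypothetical. Not legal advice.

MINIMUM_TRANSFER_RATE = 0.15        # at least 15% of gross earnings
TRUST_COMPANY_THRESHOLD = 250_000   # balance triggering a trust company custodian

def required_transfer(gross_earnings: float, rate: float = MINIMUM_TRANSFER_RATE) -> float:
    """Amount the employer must transfer within thirty days of the final day of employment."""
    if rate < MINIMUM_TRANSFER_RATE:
        raise ValueError("The guardian may require more than fifteen percent, never less.")
    return gross_earnings * rate

balance = 240_000.0                       # hypothetical existing account balance
balance += required_transfer(100_000.0)   # 15% of a hypothetical $100,000 gross = $15,000
if balance >= TRUST_COMPANY_THRESHOLD:
    print(f"Balance ${balance:,.0f}: a trust company must be appointed as custodian.")
```

In this hypothetical, the $15,000 deposit pushes the balance to $255,000, past the $250,000 threshold at which a trust company must take over as custodian.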

Such thresholds may be easily reached by some child influencers with high earning power: two minors topped Forbes’ list of Highest-Paid YouTube Stars of 2019, Anastasia Radzinskaya placing third with estimated earnings of $18 million and Ryan Kaji taking first place with estimated earnings of $26 million. Ryan Kaji became popular on YouTube, where he can be seen reviewing and “unboxing” toys. He earned $29.5 million in 2020 through ad revenue alone, but earned much more, $200 million, through derivative product deals, such as toys and clothes bearing his name.

This issue has already been addressed in France, and the New York bill is likely to inspire similar state bills.

Pour les Enfants: A New French Law Regulates Influencers Who Are Under the Age of 16

Children influence spending within their households. A national study conducted by Viacom in 2017 found that three out of four parents said their children influenced their purchase decisions. Among these decisions is whether to buy a particular toy. The toy market is expanding with the pandemic: an NPD report found it to be worth 251 billion in 2020, up 19% from 2019, as parents wish to keep their children entertained (especially during work-related Zoom calls…). The French toy market is the fifth biggest in the world.

In the last few years, toy manufacturers have found a novel way to promote their goods. “Influencer children” are filmed, often by their own parents, opening boxes of products sent by companies hoping to be featured in the videos, or provided under a contract for the products to be featured. The videos are then uploaded to video-sharing platforms. Some YouTube channels featuring child influencers boast millions of views, and some generate advertising revenues in the millions of dollars.

In France, the Studio Bubble Tea channel is the project of a father featuring his two daughters, Kalys and Athéna, and has some 1.7 million subscribers. The videos show the two young girls opening packages (unboxing videos), sampling fast food or sugary treats, and now even presenting derivative products featuring their likenesses, as they have become famous in their own right. Another team of siblings, two boys, Néo and Swan, star in the YouTube The Voice – Neo & Swan channel, which has more than 5.3 million followers, also opening huge boxes of toys and other merchandise, visiting theme parks, and doing challenges, such as the “Gummy Food v. Real Food Challenge.”

Is this work, or just kids having fun?

The frequency with which these videos are posted, several times a week, and their nature, often a scripted or semi-scripted scenario running a half-hour or more, made French legislators wonder: is this work or play?

Before the introduction of the bill, French Representative Patrick Mignola had submitted a written question in June 2018 to the Secretary of Labor, alerting her to the absence of any regulation of the online work of these minors, arguing that these activities should be considered work under the French Labor Code, and asking for the government’s position on the issue. The Secretary answered that she believed such activities were considered by the law to be leisure, not work. She stated, however, that:

“[t]his phenomenon, both in terms of volume and of financial flows, now leads to questions about the qualification of “leisure activities” with regard to criteria, notably identified by case law, which characterize the employment relationship such as the obligation to take part in the activity, to follow unilaterally defined rules, guidance in the analysis of conduct or permanent availability, the possibility of sanctioning any breach of these obligations. All videos uploaded do not meet these criteria.”

The Secretary of Labor noted further that “the ‘superposition’ between the bond of subordination and parental authority must not serve to mask a possible work performance,” and agreed that the legal framework of this activity needed to be clarified.

French law considers three cumulative factors to assess the existence of a working relationship: work performance, remuneration, and a relationship of subordination. The latter factor is most often lacking when assessing the activity of “kidfluencers,” as most of them perform at the direction of their parents. A report on a bill aiming at regulating the work of child influencers, authored by Representative Bruno Studer and published on February 5, 2020 (the Studer Report), noted that:

“many situations do not meet the combination of these three conditions. For example, the child filmed in the course of his or her daily life does not provide any service; some videos are not monetized; the child does not necessarily receive instructions or orders from the director-producer of the video. Thus, for the children concerned, the working time, the income generated, the morality of the content or even the respect of school obligations, dodge any legal framework.”

The National Assembly passed the bill unanimously, and it was then sent to the Senate. Senator Jean-Raymond Hugonet wrote in his report (the Hugonet Report) that, “[w]hile some of these videos are clearly degrading, the majority of them are heartbreaking, even pathetic. They present children in different activities – unwrapping toys, scenes from everyday life, various challenges such as spending 24 hours in a closet or eating food of a certain color for 24 hours.”

The bill became law on October 19, 2020, regulating the commercial exploitation on platforms of the image of children younger than 16 years old (Loi n° 2020-1266 du 19 octobre 2020 visant à encadrer l’exploitation commerciale de l’image d’enfants de moins de seize ans sur les plateformes en ligne) (the influencer law). The law takes effect six months after its publication, which occurred on October 19, 2020.

The Studer Report mentioned the “sadly infamous cheese challenge,” also mentioned in the Hugonet Report, in which people produced videos of themselves throwing slices of cheese at babies’ faces, as one of the reasons for passing the law. Such videos are not, however, posted on video channels dedicated to one or more children presenting products and services.

The Studer Report painted a particularly dark picture of the use of children as influencers, as it also mentioned a U.S. case in which a mother featuring her children on her YouTube channel was charged with child abuse; the children were allegedly abused, including being undernourished and pepper-sprayed as punishment. A third-party website estimated the mother’s earnings from the channel at a minimum of six, maybe seven, figures. However, such examples are thankfully the exception, not the rule, and the influencer law is not a child welfare protection law, even though the issue of whether child influencers may suffer ill psychological effects because of their media exposure was alluded to during the parliamentary debates.

French child labor law and minors under 16

The new law modifies the Labor Code to liken “kidfluencers” to children working in the entertainment industries or as fashion models, so that they too can benefit from ad hoc legal protection.

Under French labor law, minors aged sixteen or older may work, but children under sixteen cannot. Article L7124-1 of the French Labor Code carves out, however, exceptions for children engaged in entertainment activities, working on films, radio, or television programs, working as fashion models, or, since the October 7, 2016 “law for a digital Republic,” participating in video game competitions. In these cases, the law requires obtaining a prior individual administrative work authorization.

Article L7124-9 of the French Labor Code authorizes part of the minor’s earnings to be made available to the child’s legal representatives. However, “[t]he surplus, which constitutes the nest egg, is paid to the Caisse des Dépôts et Consignations and managed by this fund until the child reaches the age of majority. Direct debits may be authorized in emergency and exceptional cases.”

Article L7124-25 of the French Labor Code punishes with a 3,750-euro fine the giving, “directly or indirectly,” to the minor or their legal representatives, of funds which should have been placed in the child’s nest-egg account.

What the influencer law changed

The influencer law broadened the scope of article L7124-1 to include children working for an employer “whose activity consists in making audiovisual recordings of which the main subject is a child under the age of sixteen, with a view to distribution for profit on a video-sharing platform service.” As worded, the law protects children performing in influencer videos even if they do not have a work contract, which is most often the case, as the videos are often produced by the child’s own parents.

Specifically, the law broadened the special legal status of child performers under sixteen under article L7124-1 2° to participation in “sound recordings or audiovisual recordings, whatever their modes of communication to the public.” The law also created article L7124-1 5°, which places under the scope of the special status children working for an employer “whose activity consists in making audiovisual recordings of which the main subject is a child under the age of sixteen, with a view to distribution for profit on a video-sharing platform service,” thus alluding specifically to videos posted on platforms.

The mandatory work authorization – Article 3-I

An authorization to feature a child below sixteen in a video shared online, if the child is “the main subject” of the video, must be obtained by the child’s legal representatives in the following cases:

  • When the cumulative duration or the number of these contents exceeds, over a given period of time, a threshold fixed by decree of the Council of State [this threshold has not yet been determined]; or
  • When the distribution of these contents provides direct or indirect income to the director, producer or distributor of the video exceeding a threshold fixed by decree of the Council of State [this threshold has not yet been determined].

These two cases are not cumulative. The Hugonet Report noted that “the two conditions, even though close, do not necessarily merge. Parents may well choose not to monetize a video that ‘goes viral,’ which does not rule out damage to the exposed minor. Likewise, some video content can generate income without necessarily having a large audience, through product placement, if the people filmed are deemed to be ‘influencers.’”

The administrative authority makes recommendations to the legal representatives about:

  • The times, duration, hygiene and safety of the conditions for making the videos;
  • The risks, in particular psychological, associated with the dissemination of these videos;
  • The legal requirements about allowing normal school attendance;
  • The financial obligations incumbent on the representatives.

The administrative authorization necessary to have children under sixteen perform in such videos is granted temporarily but can be renewed. The authorization can, however, be revoked at any time and, in urgent cases, can even be suspended for a limited time while the revocation decision is pending. The influencer law does not define or explain what these urgent cases are.

Article 5 of the law added article 6-2 to the June 21, 2004 law on confidence in the digital economy, the French law regulating platforms and web-based activities, which provides that, if the administrative authority in charge of delivering the authorization finds that a video featuring minors under sixteen has been posted on a platform without prior authorization, it may refer the matter to a judge who can then “order any measure to prevent imminent damage or to put an end to a clearly unlawful disorder.” The use of the term “imminent” indicates that the procedure to be used is the emergency procédure de référé, allowing the judge to rule rapidly, even within two days under the exceptional procedure of référé d’heure à heure. The Hugonet Report approved the introduction into the bill of the judge’s intervention, as it “constitutes a strong guarantee… to respect the principle of freedom of expression.”

Article L7124-1 of the Code will require, when the influencer law enters into force, that, once the authorization to have a child perform in the video is obtained, the administrative authority provide the child’s legal representatives “information relating to the protection of the rights of the child in the context of the production of these videos, which in particular covers the consequences, on the child’s private life, of the dissemination of his image on a video-sharing platform.” The representatives must also be informed of the financial obligations incumbent on them.

The financial obligations of the legal representatives & the marketing companies – Article 3-III & 3-IV

The Studer Report noted that it is the parents of a child performing in an influencer video who directly receive the revenue, and that the child does not benefit from the protection the French Labor Code provides to child performers. The salaries of child performers must be paid, until they turn eighteen, into a special escrow account at the Caisse des Dépôts et Consignations, a public financial institution.

Article 3 of the influencer law states that if the direct and indirect income derived from the distribution of the videos exceeds, over a given period of time, a threshold set by decree of the Council of State, the income received from the date on which this threshold is exceeded must be paid without delay by the marketing company to the Caisse des Dépôts et Consignations, which will then manage the income until the majority or emancipation of the child. A portion of the income, determined by the competent authority, can be left at the disposal of the child’s legal representatives.

A right to be forgotten – Article 6

The influencer law also addresses the issue of privacy. Its article 4-6° directs video-sharing platforms to adopt charters aiming at facilitating a minor’s right to be forgotten, which is provided to minors by article 51 of the French data privacy law. These charters must inform the children, using “clear and precise, easily understandable terms,” how this right can be implemented.

Article 6 of the influencer law specifies that parental authorization is not necessary for the child to exercise this right. Article 7 of the law directs the government to provide the Parliament, within six months of the publication of the law, that is, by April 19, 2021, at the latest, a report evaluating the strengthening of the protection of the personal data of minors since the implementation of the GDPR in France.

As explained by the Studer Report, a minor’s right to be forgotten belongs to the holder of parental authority. However, in the case of a child influencer, “there are many situations in which parents are responsible for distributing content showing their children and find it beneficial, particularly from a financial point of view, to keep this content online.”

The new duties of the marketing companies under the new law – Article 3-IV – how to pay the child

Article 3-IV of the law provides that advertisers placing a product “in an audiovisual program broadcast on a video-sharing platform whose main subject is a child under the age of sixteen” must verify with the person responsible for the broadcast whether the monies due for the product placement must be paid to the Caisse des Dépôts et Consignations. If so, the marketing company must pay the child’s salary directly to the Caisse des Dépôts et Consignations, minus the part of the salary which may be kept by the legal representatives under the law. Failure to comply with this obligation is punished with a 3,750-euro fine.

The new duties of the video-sharing platforms under the new law – Article 4 – adopting charters

Article 4 of the law gives video-sharing platforms the duty to adopt charters which must aim, inter alia:

  • To promote information to users on the laws and regulations applicable to the dissemination of the image of children under sixteen through video-sharing services and on the risks, particularly the psychological risks, associated with the dissemination of these images;
  • To promote information and awareness of minors less than sixteen years old, with the help of non-profit child protection organizations, about the consequences of the dissemination of their image on a video-sharing platform on their private lives, the psychological and legal risks of such use, and the recourses they may use “to protect their rights, dignity and moral and physical integrity;”
  • To promote the reporting by users of content featuring children under the age of sixteen which would infringe upon the dignity or the moral or physical integrity of these children;
  • To take “all useful measures to prevent processing, for commercial purposes, such as commercial solicitation, profiling, and behavioral advertising, of the personal data of minors that would be collected by their services through the posting by a user of audiovisual content in which a minor appears;”
  • To improve, in connection with child protection nonprofit organizations, the detection of situations in which the production or distribution of such content would violate the dignity or the moral or physical integrity of the minors under the age of sixteen whom they feature;
  • To facilitate the minors’ implementation of their right to be forgotten, as provided to them by article 51 of the French data protection law, law n° 78-17 of January 6, 1978, informing them “in clear and precise terms, easily understandable by [the minors], of the methods of implementing this right.”

Article 5 of the law states that the French audiovisual regulator, the Conseil supérieur de l’audiovisuel (CSA), “will publish periodic reports on the implementation and effectiveness of these charters.” The Hugonet Report noted that “[t]his ‘soft law’ approach is in fact the only alternative, in an online world where more direct regulation is legally impossible, not to mention the possible infringements of freedom of expression.” It is not clear whether the charters will be public or internal best-practice documents. The CSA reports should, however, be made public and thus provide practitioners with useful information.

Copyright Law Update: the Protecting Lawful Streaming Act & the CASE Act

The Protecting Lawful Streaming Act was passed in December 2020 as part of the Covid-19 stimulus omnibus bill, the Consolidated Appropriations Act, 2021.

The Act adds a section 2319C to Title 18 of the U.S. Code on illicit digital transmission services, defined as services having as a primary purpose the public performance of works by digital transmission. The Act prohibits providing such services commercially or for financial gain without the authorization of the copyright owner.

The penalties are a fine or imprisonment for not more than three years, or both, rising to imprisonment for not more than five years if the offense “was committed in connection with one or more works being prepared for commercial public performance; and … the person knew or should have known that the work was being prepared for commercial public performance.” The penalty can be a fine and up to 10 years in prison if the offense is a second or subsequent offense under section 2319C or section 2319(a).

The “Copyright Alternative in Small-Claims Enforcement Act of 2020,” or “CASE Act of 2020,” was also passed as part of the omnibus bill. It creates Chapter 15 of the Copyright Act and establishes a Copyright Claims Board (CCB) in the Copyright Office, which must be established within one year of the enactment of the Act.

Composition of the Copyright Claims Board

The CCB, which will be an administrative tribunal for matters related to copyright, will be composed of three full-time Copyright Claims Officers appointed by the Librarian of Congress after consultation with the Register of Copyrights. The Officers must be attorneys with at least seven years of experience who “have substantial familiarity with copyright law and experience in the field of alternative dispute resolution, including the resolution of litigation matters through that method of resolution.” The three Officers will be appointed to four-, five-, and six-year terms, respectively.

The Register of Copyrights must also hire at least two full-time Copyright Claims Attorneys “to assist in the administration of the Copyright Claims Board,” who must have “substantial experience in the evaluation, litigation, or adjudication of copyright infringement claims” and no fewer than three years of “substantial experience in copyright law.” They will be appointed to five-year terms.

Both Copyright Claims Officers and Copyright Claims Attorneys will have to recuse themselves if they have reason to believe there is a conflict of interest. Parties will have to refrain from ex parte communication with the Copyright Claims Officers and the Copyright Claims Attorneys.

Participation in CCB proceedings is voluntary, but respondents need to opt out within 60 days

Participation in a CCB proceeding is voluntary, and the parties retain their right to pursue their claim in a court of law “or any other forum.” However, if a respondent fails to opt out within a 60-day period after having been served by the claimant, she or he “loses the opportunity to have the dispute decided by a court created under article III of the U.S. Constitution and waives the right to a jury trial regarding the dispute.”

The respondent will be able to file a counterclaim.

Scope of jurisdiction

The CCB will have the power to adjudicate copyright infringement claims if the claim does not exceed $30,000. The CCB will also be able to adjudicate claims for a declaration of noninfringement of a copyright, and claims of section 512(f) DMCA misrepresentations, that is, a misrepresentation in a DMCA takedown notice of the infringing nature of a material or activity, or a misrepresentation that a “material or activity was removed or disabled by mistake or misidentification.”

Applicable laws

The CCB will follow the regulations established by the Register of Copyrights under the new Chapter 15 and other “relevant principles of law under this title.” In case of conflict with “a judicial precedent on an issue of substantive copyright law that cannot be reconciled,” the CCB will “follow the law of the Federal jurisdiction in which the action could have been brought if filed in a district court of the United States, or, if the action could have been brought in more than one such jurisdiction, the jurisdiction that the [CCB] determines has the most significant ties to the parties and conduct at issue.” It thus appears that the parties will still be able to benefit from the positions and tests used by the courts in their respective circuits.

Legal representation

The proceedings will take place at the CCB offices, but the parties will not have to be present. The procedure will be written, and the CCB will be able to hear the parties or their counsel via “internet-based applications and other telecommunications facilities.” In cases in which “physical or other nontestimonial evidence material to a proceeding” cannot be provided this way, the CCB “may make alternative arrangements for the submission of such evidence that do not prejudice any other party to the proceeding.”

The parties can be represented by an attorney or by a law student “who is qualified under applicable law governing representation by law students of parties in legal proceedings and who provides such representation on a pro bono basis.”

Procedure – the claim

Parties will initiate the proceeding by filing a claim with the CCB, which will include a certified statement of material facts in support of the claim and a filing fee. The amount of the fee will be established by the Copyright Office, which indicated in its Small Claims Report (see note p. 140) that “[a] conforming amendment to section 708 may be advisable.”

The claim will then be reviewed by a Copyright Claims Attorney to ensure that it is compliant with Chapter 15 and other applicable regulations. The statute of limitations is three years after the claim accrued.

If the claim is compliant, the claimant is notified and can then serve it on the defendant. If it does not comply, the claimant will be permitted to file an amended claim within thirty days after receiving the notice of non-compliance and will have to pay an additional filing fee. If the claim is still not compliant, the claimant will be given a second opportunity to amend the claim, no later than thirty days after the date of the second notice of non-compliance, without having, however, to pay another filing fee. If the claim is still not compliant, the proceeding will be dismissed without prejudice.

If the claim is against an online service provider storing infringing material or linking to it, the claim will have to state that the claimant previously notified the service provider by a DMCA notice, and that the service provider failed to expeditiously remove or disable access to the material after receiving such notice. The opposing party can file a counterclaim, whose compliance is similarly reviewed by a Copyright Claims Attorney.

All claims can be dismissed without prejudice for unsuitability by the CCB if the CCB concludes that a necessary party or an essential witness has not been contacted, that evidence or expert testimony is lacking, that the claim involves an issue exceeding the CCB’s competence, or that the claimant has become a member of a class action. A claim can also be voluntarily dismissed, without prejudice, by the claimant.

Even if a claim is compliant, the parties can choose at any time to settle the case, either during a conference with a Copyright Claims Officer or by submitting a settlement proposal to the CCB.

If the claim proceeds, the CCB will have the power to make factual determinations, based upon a preponderance of the evidence.

If the respondent does not appear, or fails to defend against the claim, the CCB can issue its decision by default.

Permissible remedies

The CCB may award actual damages and profits. It can also award statutory damages: for works timely registered, statutory damages cannot exceed $15,000 per work infringed; for works not timely registered, statutory damages cannot exceed $7,500 per work infringed, or a total of $15,000 in any one proceeding. In all cases, total damages awarded in a single proceeding are capped at $30,000.
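To make the interaction of these caps concrete, here is a minimal sketch in Python of the upper bound on a statutory award; the claim figures are entirely hypothetical:

```python
# Illustrative sketch of the CASE Act statutory damages caps described above;
# claim figures are hypothetical. Not legal advice.

TIMELY_PER_WORK = 15_000   # cap per timely registered work
LATE_PER_WORK = 7_500      # cap per work not timely registered
LATE_AGGREGATE = 15_000    # aggregate cap for non-timely works in one proceeding
PROCEEDING_CAP = 30_000    # total statutory damages cap per proceeding

def max_statutory_damages(timely_works: int, late_works: int) -> int:
    """Upper bound on statutory damages the CCB could award in one proceeding."""
    timely = timely_works * TIMELY_PER_WORK
    late = min(late_works * LATE_PER_WORK, LATE_AGGREGATE)
    return min(timely + late, PROCEEDING_CAP)

# Example: one timely registered work and three non-timely works:
# $15,000 + min(3 × $7,500, $15,000) = $30,000, which also hits the proceeding cap.
print(max_statutory_damages(1, 3))  # 30000
```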

Effect of a CCB determination

If the CCB issues a final determination, including a default determination, it will preclude, solely with respect to the parties to such determination, relitigation before any court or tribunal, or before the CCB, of the claims and counterclaims asserted and determined by the CCB. However, parties will be able to litigate again, in a court of law or before the CCB, claims or counterclaims arising from the same facts which were either not previously asserted or not determined by the CCB. Also, if the CCB determined ownership of a copyright for the purposes of resolving the case, such determination will not have any preclusive effect in any other action or proceeding before a court of law or a tribunal, including the CCB.

No legal precedent

Decisions by the CCB may not, however, be relied upon as legal precedent, and thus no CCB case law will stem from the adjudicated claims. Following the cases is, however, likely to be enlightening, especially since a Copyright Claims Officer dissenting from a CCB decision will be able to file her or his dissent. This is particularly interesting as all final determinations of the CCB will be published on a website. Parties will be able to submit a written request for reconsideration or amendment of a final determination made by the CCB, no later than thirty days after the issuance of the final determination, “if the party identifies a clear error of law or fact material to the outcome, or a technical mistake.”

Possible negative effect of the CCB

While the CCB will make it easier and less costly for copyright holders to defend their rights, which is certainly a positive effect, it may also lead to frivolous claims being filed, and so-called “copyright trolls” who engage in mass copyright litigation may find it an efficient tool to pursue their trade.

Positive effect of the CCB

Artists, writers, small businesses, and entrepreneurs will be able to defend their rights at a cost much lower than the cost of litigation in federal court (the courts which have jurisdiction over copyright cases). The Register of Copyrights will also establish regulations for claims seeking less than $5,000 (exclusive of attorneys’ fees and costs). In these cases, at least one Copyright Claims Officer will determine the outcome of the case, and the determination will have the same effect as one issued by the entire CCB.

Image is courtesy of Flickr user Alan Levine under a CC BY 2.0 license.

Oh, the Places You’ll Go! (Or Maybe Not)

The US Court of Appeals for the Ninth Circuit held on December 18, 2020, that the Oh, the Places You’ll Boldly Go! comic book infringed the copyright in the Dr. Seuss book Oh, the Places You’ll Go!, as its use of the original was not fair use.

The case is Dr. Seuss Enterprises, L.P. v. Comic Mix LLC.

Appellant Dr. Seuss Enterprises (DSE) holds the copyright to the works of Theodor S. Geisel, aka “Dr. Seuss,” including his final book, Oh, the Places You’ll Go! (Go!), and runs a product licensing and merchandising program. The Ninth Circuit noted that “Dr. Seuss” was the top licensed book brand in 2017 and a popular choice as a graduation present.

Appellee Comic Mix developed and published Oh, the Places You’ll Boldly Go! (Boldly), a mash-up of the Dr. Seuss book and Star Trek using elements from each of the works. “Mash-up” refers to a work created by combining elements of other works; it is not a legal term. If a mash-up uses elements protected by copyright, it may nevertheless not be infringing if the use is fair use. Comic Mix did not seek to obtain a license from DSE, nor did it obtain authorization to use elements of Go!.

Comic Mix had sought to fund the publication of the book through a Kickstarter crowdfunding campaign. DSE considered the mash-up to be infringing and sent several letters to Defendants. It also sent a DMCA takedown notice to Kickstarter, which then took down the fundraising page and blocked the pledged funds. DSE then filed a copyright and trademark infringement suit.

Comic Mix moved to dismiss the trademark infringement claims, arguing that use of the trademarks in the title, artistic style, or fonts of the mash-up book is protected by the First Amendment under the Rogers v. Grimaldi test, which requires judges to examine (1) whether the mark has artistic relevance and (2) if so, whether the use of the mark is explicitly misleading. The District Court dismissed Plaintiff’s trademark claim, finding the use to be nominative fair use.

Comic Mix also argued that the mash-up was a parody and thus fair use. While a parody is fair use, fair use can be found by the courts even if the derivative work is not a parody. As defined by the Supreme Court in its 1994 Campbell case, a parody uses elements of a prior author’s work to comment, at least in part, on that author’s work. Fair use, in contrast, is determined by examining four fair use factors. The District Court did not find the mash-up to be a parody, but granted summary judgment to Comic Mix, finding the mash-up to be fair use.

DSE appealed, and a panel of the Ninth Circuit held a hearing on April 27, 2020. DSE’s counsel argued that Boldly! is a “market substitute” for Go! and that it “would compete head-to-head in the graduation gift market.” Comic Mix’s counsel argued it was a parody.

On December 18, 2020, the panel reversed the US District Court’s grant of summary judgment, finding that the use was not fair use.

The Four Fair Use Factors

The first fair use factor, the purpose and character of the use, weighed against fair use. The use is commercial, and the mash-up is not a parody, as it does not critique or ridicule Dr. Seuss’s works. Nor did the panel find the use of the original work otherwise transformative: Boldly! “merely repackaged Go!.” There was no new purpose or character, according to the Ninth Circuit; Boldly! merely recontextualized the original expression. It “[did] not alter Go! with new expression, meaning or message,” the “world of Go! remains intact,” the derivative work “was merely repackaged into a new format,” and the “Seussian world… is otherwise unchanged.”

The second fair use factor, the nature of the work, also weighed against fair use, as Go! is a creative and expressive work.

The third fair use factor, the amount and substantiality of the use, weighed “decisively” against fair use, as the use was both quantitatively and qualitatively substantial. The Ninth Circuit found the copying to be “considerable,” around 60% of the Dr. Seuss book, including illustrations, and found that Comic Mix “took the heart of Dr. Seuss’s works,” giving as an example the use of the “highly imaginative and intricately drawn machine that can take the star-shaped status-symbol on and off the bellies of the Sneetches,” from The Sneetches.

The fourth and final factor, the effect of the use on the potential market, also weighed against fair use. The proponent of the affirmative defense of fair use bears the burden of proof, and the Court found that Comic Mix had not proven there was no potential market harm. Comic Mix tried unsuccessfully to argue that fair use is not an affirmative defense and that it was thus DSE which had to prove potential market harm. Counsel for DSE argued during the April 2020 hearing that the District Court had incorrectly placed the burden of proof of the fourth factor on DSE, a decision he found to be “inconsistent” with Campbell.

The Ninth Circuit noted that Comic Mix had “intentionally targeted and aimed to capitalize on the same graduation market as Go!,” that it had planned to release Boldly! in time for school graduations, and that the unauthorized derivative work curtailed Go!’s potential derivative market, noting further that DSE “[had] already vetted and authorized multiple derivatives of Go!”. DSE’s counsel reminded the panel during the April 2020 hearing that the Supreme Court had emphasized in Campbell that licensing of derivatives is an important incentive to creation.

Comic Mix’s counsel argued, curiously, that DSE did not have the right to control the “fair use market for transformative work,” but acknowledged that DSE was entitled to make derivative works. Indeed, this right is granted to DSE by Section 106(2) of the Copyright Act. The Ninth Circuit noted that DSE certainly has the right to make “the artistic decision not to saturate those markets with variations of their original,” citing Castle Rock Ent., a case where the Second Circuit Court of Appeals stated that even though the owner of the copyright in the Seinfeld television series “[had] evidenced little if any interest in exploiting this market for derivative works based on Seinfeld, such as by creating and publishing Seinfeld trivia books… the copyright law must respect that creative and economic choice.”
Image is courtesy of Flickr user Sarah B Brooks under a CC BY 2.0 license.


Oh, What a Case (9th Circ. 2020): Works Presented as Factual are Factual when Determining Scope of Copyright Protection

The U.S. Court of Appeals for the Ninth Circuit held on September 8, 2020, in Corbello v. Valli, that the musical Jersey Boys did not infringe plaintiff’s copyright in an autobiography of Tommy DeVito, ghostwritten by Rex Woodard, as the musical had not copied any protectable elements of the book.

The case is interesting because the Court applied its newly adopted “Asserted Truths” doctrine, holding that an author representing a work as nonfiction cannot later claim that it was fictionalized and thus entitled to full copyright protection.

The facts

Tommy DeVito is one of the founding members of the Four Seasons, with Frankie Valli, Bob Gaudio, and Nick Massi. The group produced several hits, including Sherry, Big Girls Don’t Cry, Walk Like a Man, and December, 1963 (Oh, What a Night), and was inducted into the Rock and Roll Hall of Fame in 1990.

Rex Woodard ghostwrote Tommy DeVito’s autobiography in the late Eighties (the Work), using taped interviews of the musician and even portions of the F.B.I. file on the Four Seasons obtained under the Freedom of Information Act. The two men, however, did not find a publisher for the book.

Tommy DeVito executed an agreement in 1999 with Frankie Valli and Bob Gaudio, granting them the exclusive rights to his “biography” for the purpose of creating a musical based on the life and music of the Four Seasons. The rights were to revert to DeVito should Valli and Gaudio not exercise their rights within a defined period. In 2004, Valli and Gaudio granted the right to use the name and music of the band, the name and likeness of the musicians, and the story of their lives, to the producers of an upcoming show about the Four Seasons.

DeVito provided the writers of the show access to his unpublished autobiography; the show became the Jersey Boys musical (the Play). It ran on Broadway from 2005 to 2017 and was adapted into a movie in 2014. The musical and the movie tell the story of the four members of the Four Seasons.

Donna Corbello, Woodard’s surviving wife, tried again, unsuccessfully, to publish the book written by her husband after the show started to run, believing that its success might help sell the autobiography to a publisher.

She then discovered that DeVito had registered the copyright in the Work as its sole author, and she filed a supplementary application with the U.S. Copyright Office to add her late husband as a coauthor and co-claimant of the Work. The certificate of registration was amended to list Woodard and DeVito as coauthors and co-claimants of the Work.

The (long) procedure

Corbello then sued DeVito for breach of contract and equitable accounting of the Work’s profits, later adding as defendants the producers of Jersey Boys as well as Valli and Gaudio, after learning that DeVito had provided access to the book, and also sued for copyright infringement. Corbello claimed that the Play was a derivative work of the Work, owned exclusively by the coauthors and thus by herself, as the lawful successor of her husband.

The U.S. District Court for the District of Nevada granted summary judgment in 2011, declaring the book a joint work “because of DeVito’s non-de minimis creative edits.” The Court reviewed the 1999 agreement and found it to be the grant of an exclusive license, which had lapsed, not a transfer of copyright. Woodard was thus a co-owner of the Work, and Corbello his successor in interest.

A panel of the U.S. Court of Appeals for the Ninth Circuit reversed in part in 2015. Judge Sack noted in his concurring opinion that the matter would be greatly simplified if the district court were to decide on remand that the Play was not infringing. The case nevertheless proceeded to trial after the District Court only partially granted summary judgment on remand, holding that, while there was substantial similarity sufficient to avoid summary judgment, at least under a thin copyright protection, most of the similarities were based on historical facts. The jury found in favor of Plaintiff, the District Court granted a motion for a new trial, and that decision was appealed. The Ninth Circuit then reviewed the case de novo.

The Ninth Circuit copyright infringement test

The Ninth Circuit’s substantial-similarity test contains an extrinsic and an intrinsic component.

The extrinsic test requires a three-step analysis: (1) identifying similarities between the copyrighted work and the accused work, (2) disregarding similarities based on unprotectable material or authorized use; and (3) determining the scope of protection (“thick” or “thin”) to which the remainder is entitled “as a whole.”

The intrinsic test is conducted only if the extrinsic analysis succeeds. It examines an ordinary person’s subjective impressions of the similarities between two works.

In our case, the Court did not apply the intrinsic test because the extrinsic test failed. The Court applied the extrinsic test to the elements of the Work which were “undisputedly factual.” The introduction of Tommy DeVito is about a historical character; the introduction of the song Sherry is a historical fact, as are the introductions of the songs Big Girls Don’t Cry and Dawn, and the description of the induction into the Rock and Roll Hall of Fame. As for the comparison of the Four Seasons with the Beatles, it was made of unprotectable ordinary phrases. These elements were therefore not protectable.

The new asserted truths doctrine

The Court then applied the extrinsic test to the claimed fictions represented to be facts and presented its new asserted-truths doctrine, which stems from the doctrine of copyright estoppel: once a plaintiff’s work has been held out to the public as factual, the author-plaintiff cannot later claim that the book is actually fiction and thus entitled to the broader protection afforded to fictional works.

The Ninth Circuit did not believe that copyright estoppel is the right term for the doctrine and named it instead the “asserted truths” doctrine, citing Houts v. Universal City Studios:

“Estoppel” is not, in our view, an apt descriptor for the doctrine at work here. For one thing, … detrimental reliance is not an element of this doctrine, as “the [so-called] estoppel [is] created solely by plaintiff’s affirmative action and representation that the work was factual.” For another, application of estoppel concepts often suggests that the party against whom estoppel is applied is in some way culpable….

Rather than “copyright estoppel,” we will refer to this rule of copyright law as the “asserted truths” doctrine, because it is the author’s assertions within and concerning the work that the account contained in the book is truthful that trigger its application.

In our case, the Work was presented as a reliable source of factual information about the Four Seasons, even as a “complete and truthful chronicle of the Four Seasons.” The Court noted that DeVito had provided a copy of it to the Play’s writers when they were researching the history of the Four Seasons, and that they viewed it as a factual source “even better than newspaper or magazine articles, because it was co-written by a participant in the events described.”

The Court specified that “the asserted truths doctrine applies not only to the narrative but also to dialogue reproduced in a historical nonfiction work represented to be entirely truthful” and “includes dialogue that an author has explicitly represented as being fully accurate, even if the author was unlikely to have recalled or been able to report the quotations exactly.”

Authors of biographies would thus be well advised to add a disclaimer to their work, stating that the dialogue, while based on historical facts, is partly the fruit of the author’s imagination.
This post was first published on the TTLF Newsletter on Transatlantic Antitrust and IPR Developments, Stanford-Vienna Transatlantic Technology Law Forum

Image courtesy of Flickr user Andy Roberts under a CC BY 2.0 license.

All the President’s Tweets… and Section 230 of the CDA

On May 28, 2020, President Donald Trump issued the Executive Order on Preventing Online Censorship (EO) directing the Secretary of Commerce, in consultation with the Attorney General, to request that the Federal Communications Commission “expeditiously propose” regulations to clarify when a provider of an interactive computer service screening offensive content under Section 230 (c)(2)(A) of the Communications Decency Act (CDA) would not be able to benefit from the Good Samaritan provision of the CDA.

Twitter called the EO “a reactionary and politicized approach to a landmark law.” An executive order is not a law and cannot overturn a statute passed by Congress. The EO nevertheless calls for a clarification of the law; yet laws are clarified and interpreted by the courts, not by government agencies.

The CDA is an essential law of the web

The CDA is an important federal law: without it, the web as we know it could not function, as intermediaries would constantly be held liable for torts, such as defamation, and would have to defend themselves in court.

The law was passed by Congress after a New York court held in Stratton Oakmont, Inc., v. Prodigy Servs. that the operator of a computer bulletin board, where a third party had posted defamatory allegations, was a publisher. Congress explained in 1996 that “[one] of the specific purposes of [section 230] is to overrule Stratton-Oakmont v. Prodigy and any other similar decisions which have treated such providers and users as publishers or speakers of content that is not their own because  they have restricted access to objectionable  material.”

What led to this EO

The EO was signed following Twitter’s decision to add a civic integrity notice to two tweets posted by the President on May 26, 2020 which alleged mail-in ballot fraud in California (see here and here). The warning is a link reading “! Get the facts about mail-in ballots” and leads to a page offering a counter view. Twitter explained that it had done so “as part of our efforts to enforce our civic integrity policy. We believe those Tweets could confuse voters about what they need to do to receive a ballot and participate in the election process.”

President Trump reacted to this move on Twitter, posting “….Twitter is completely stifling FREE SPEECH, and I, as President, will not allow it to happen!”

The President took the view that his freedom of speech had been abridged by Twitter, a private company. The First Amendment of the Constitution, however, does not protect speech against private interference; it prevents the government from abridging freedom of speech. This means private companies may set their own policies regulating speech, as long as these policies do not violate the law (as would be the case, for instance, if a policy violated the Civil Rights Act).

The EO states that:

Twitter now selectively decides to place a warning label on certain tweets in a manner that clearly reflects political bias.  As has been reported, Twitter seems never to have placed such a label on another politician’s tweet.  As recently as last week, Representative Adam Schiff was continuing to mislead his followers by peddling the long-disproved Russian Collusion Hoax, and Twitter did not flag those tweets.  Unsurprisingly, its officer in charge of so-called ‘Site Integrity’ has flaunted his political bias in his own tweets.

The EO reads less like an official order than like a personal message from the President lashing out at Twitter, at one of its employees in charge of Site Integrity, who was named in another of the President’s tweets, and even at Representative Adam Schiff, the lead manager in the President’s impeachment trial.

The Good Samaritan provisions of the CDA

Section 230(c)(1) of the CDA created a safe harbor for providers and users of an interactive computer service: they may not “be treated as the publisher or speaker of any information provided by another information content provider.”

The CDA defines an “interactive computer service” as any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server. These include, for instance, web hosts, search engines, e-commerce sites, and, yes, social media platforms such as Twitter.

They are immune because they act only as intermediaries for third-party content. As such, they cannot be held liable for content posted through their services: they are intermediaries, not publishers.

The immunity for screening offensive content of the CDA

Section 230(c)(2)(A), in turn, shields providers and users of interactive computer services from civil liability for “any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.”

The EO argues:

It is the policy of the United States to ensure that, to the maximum extent permissible under the law, this provision is not distorted to provide liability protection for online platforms that — far from acting in “good faith” to remove objectionable content — instead engage in deceptive or pretextual actions (often contrary to their stated terms of service) to stifle viewpoints with which they disagree.

The argument is that platforms are taking advantage of their power to screen offensive content, even content protected by the First Amendment, to promote their point of view.

What does the EO aim to achieve?

The EO argues that:

In a country that has long cherished the freedom of expression, we cannot allow a limited number of online platforms to hand pick the speech that Americans may access and convey on the internet.  This practice is fundamentally un-American and anti-democratic.  When large, powerful social media companies censor opinions with which they disagree, they exercise a dangerous power.  They cease functioning as passive bulletin boards, and ought to be viewed and treated as content creators.

The argument is that social media platforms have now taken on the role of a publisher and should no longer be protected by the Section 230 safe harbor.

The EO calls for the clarification of the scope of Section 230 immunity, arguing that “the immunity should not extend beyond its text and purpose to provide protection for those who purport to provide users a forum for free and open speech, but in reality use their power over a vital means of communication to engage in deceptive or pretextual actions stifling free and open debate by censoring certain viewpoints.”

The scope of section 230 immunity is at stake

The EO further argues that:

When an interactive computer service provider removes or restricts access to content and its actions do not meet the criteria of subparagraph (c)(2)(A), it is engaged in editorial conduct.  It is the policy of the United States that such a provider should properly lose the limited liability shield of subparagraph (c)(2)(A) and be exposed to liability like any traditional editor and publisher that is not an online provider.

The “criteria” of Section 230(c)(2)(A) are not clear. The Court of Appeals for the Ninth Circuit recently noted in Enigma Software Group USA v. Malwarebytes, Inc. that the term “otherwise objectionable” is a “catchall” phrase, citing Judge Fisher’s concurring opinion in Zango, Inc. v. Kaspersky Lab, Inc., and reviewed the legislative history of the CDA as a law aiming at protecting minors from online pornography. The Ninth Circuit recognized in Enigma that interpreting the statute to give providers unbridled discretion to block online content “would, as Judge Fisher warned, enable and potentially motivate internet-service providers to act for their own, and not the public, benefit.”

The EO argues that the purpose of Section 230(c) is “narrow,” thus appearing to argue that the CDA’s goal was only to protect users against pornography. However, the Supreme Court held in 1997, in Reno v. ACLU, that two provisions of the CDA, one imposing sanctions for knowingly transmitting obscene or indecent messages, the other for sending patently offensive material to minors, were unconstitutional as abridging freedom of speech.

Yet, the CDA played a vital role in the development of the web as we know it, including social media, even after being stripped of the provisions which had, in essence, given it its name, the Communications Decency Act… Therefore, the scope of Section 230(c)(2)(A) is likely broader than pornography (obscenity is not protected by the First Amendment, see Roth v. United States).

Congress’s statutory findings for the CDA stated that interactive computer services “offer a forum for a true diversity of political discourse, unique opportunities for cultural development, and myriad avenues for intellectual activity,” which suggests broader goals for passing the law. The CDA is essential to the operation of the web as we know it. While the EO aims at preventing online censorship, it would likely lead to constant censorship, to the point that the social media business model could be seriously impaired, along with the robust marketplace of ideas which the First Amendment ideally fosters.

Do facts still exist?

The tweets which had warranted the Twitter warning read:

There is NO WAY (ZERO!) that Mail-In Ballots will be anything less than substantially fraudulent. Mail boxes will be robbed, ballots will be forged & even illegally printed out & fraudulently signed. The Governor of California is sending Ballots to millions of people, anyone….. living in the state, no matter who they are or how they got there, will get one. That will be followed up with professionals telling all of these people, many of whom have never even thought of voting before, how, and for whom, to vote. This will be a Rigged Election. No way!

These statements have not been substantiated and thus breached Twitter’s civic integrity policy, under which Twitter’s services cannot be used “for the purpose of manipulating or interfering in elections or other civic processes.”

Twitter’s notice page featured several tweets taking the view that the President’s voting fraud claims were unsubstantiated, including a tweet with a link to a press release from the office of California Governor Gavin Newsom about his executive order requiring mail-in ballots to be sent to every registered voter in California for the November general election.

However, even a statement coming from an official source may soon no longer be trustworthy. On May 29, a tweet from the White House account was flagged by Twitter as breaching the platform’s glorification of violence policy.  Are we indeed living in a post-truth world?


The annotations of the Official Code of Georgia Annotated are not protected by copyright

On April 27, 2020, the Supreme Court affirmed the judgment of the Court of Appeals for the 11th Circuit which had held that the annotations of the Official Code of Georgia Annotated (OCGA) are not protected by copyright.

The OCGA is the official code of the State of Georgia; it includes the Georgia statutes in force and non-binding supplementary material, including annotations such as summaries of judicial decisions or lists of relevant legal articles.

The Code Revision Commission assembles the OCGA. It is composed of the Lieutenant Governor of Georgia, four Georgia senators, the Speaker of the House, four Georgia representatives, and five members appointed by the State Bar of Georgia.

However, it is a private organization, Matthew Bender & Co., Inc., a division of the LexisNexis Group, which prepares the annotations, under a work-for-hire agreement. As such, the State of Georgia owns the copyright in the annotations.

While an unannotated version of the Georgia Code is made available for free on the LexisNexis site, the OCGA itself sits behind a paywall. Public.Resource.Org (PRO) is a non-profit organization which aims to facilitate public access to government records and legal materials. It scanned a copy of the paper version of the OCGA and posted it on its freely accessible site.

The Commission sent PRO several letters asking for the copy of the OCGA to be taken down from its site. As the non-profit refused to do so, the Commission filed a copyright infringement suit.

The U.S. District Court for the Northern District of Georgia held that the annotations were protected by copyright because they are not “enacted into law.” The U.S. Court of Appeals for the 11th Circuit reversed, reasoning that the true author of the OCGA is not the Commission but the People, while judges and legislators are only “draftsmen… exercising delegated authority.”

The Supreme Court affirmed, but Chief Justice Roberts, who delivered the opinion of the Court, followed a different reasoning from that of the 11th Circuit judges.

The Court relied on the government edicts doctrine, under which judges and legislators cannot be the authors of the works they produce in the course of their official duties. This doctrine was developed in three 19th-century cases: Wheaton v. Peters (1834), holding that a court reporter cannot hold a copyright in the Court’s opinions; Banks v. Manchester (1888), holding that “the judge who, in his judicial capacity, prepares the opinion or decision, the statement of the case and the syllabus or head note” cannot be the author of the work and thus cannot hold its copyright; and Callaghan v. Myers (1888), holding again that a court reporter cannot hold a copyright in the Court’s opinions.

For the Court, “[i]f judges, acting as judges, cannot be “authors” because of their authority to make and interpret the law, it follows that legislators, acting as legislators, cannot be either.”

The Commission qualifies as a legislator, as it “functions as an arm of [the Georgia Legislature] for the purpose of producing the annotation… [and] serves as an extension of the Georgia Legislature in preparing and publishing the annotations.” The Commission creates the annotations in the discharge of its legislative duties, in its role as an adjunct to the Legislature.

Justice Ginsburg dissented, taking the view that the annotations “are not created contemporaneously with the statutes to which they pertain,” that they are “descriptive rather than prescriptive,” and that they are only “given for the purpose of convenient reference.” Justice Sotomayor, who joined the opinion of the Court, noted during oral argument that dictum is not law either.

The State of Georgia had argued that §101 of the Copyright Act lists “annotations” among the works eligible for copyright protection. This argument did not convince the Court, as §101 does not state that annotations prepared by judges or legislators can be protected.

An amicus curiae brief filed by the American Library Association argued that “[t]he LexisNexis unannotated code fails to provide meaningful citizen access to Georgia law.” It further noted that the online code can only be accessed after agreeing to a clickwrap, which states that the State of Georgia “reserves the right to claim and defend the copyright in any copyrightable portions of the site,” but that, once on the site, the user cannot tell which parts of the Code are protected by copyright and which are not. The brief also noted that such a scheme “undermin[es] libraries’ longstanding commitment to patron privacy.”

Georgia had also argued that, if these annotations are not protectable, it would not be able to induce private organizations, such as LexisNexis, to prepare such works. Chief Justice Roberts wrote that “[t]hat appeal to copyright policy, however, is addressed to the wrong forum,” as it is Congress, not the courts, which must decide the best way to pursue the objectives of the Copyright Clause of the U.S. Constitution.

Public access to the law is an important issue, and it remains to be seen whether this case will advance or hinder it.

New York’s SHIELD Act Updates New York Data Breach Law

New York’s SHIELD Act (“Stop Hacks and Improve Electronic Data Security” Act) is an amendment to New York’s data breach notification law. It took effect on October 23, 2019, but its data security provisions, codified in N.Y. Gen. Bus. Law § 899-bb, only took effect on March 21, 2020.

First let’s have a look at the provisions which took effect in 2019.

The Act greatly expanded the territorial scope of New York’s data breach law, which no longer covers only persons or companies conducting business in New York State: the law now applies whenever a breach affects a New York resident.

The definition of “private information” is expanded to include an account number or a credit or debit card number, if it can be used to access an individual’s financial account without additional identifying information, such as a security code, access code, or password. It also now includes a username or e-mail address combined with a password or security question and answer that would permit access to an online account.

“Private information” now also includes biometric information, which is defined as “data generated by electronic measurements of an individual’s unique physical characteristics.”

These include:

  • a fingerprint,
  • a voice print,
  • a retina or iris image,
  • any other unique physical representation or digital representation of biometric data which are used to authenticate or ascertain the individual’s identity.

What is a security breach?

The definition of a security breach was expanded to include unauthorized access to computerized data that compromises the security, confidentiality, or integrity of private information maintained by a business. Before the SHIELD Act, the law covered only the unauthorized acquisition of such data.

The SHIELD Act specifies that a company may consider several factors to determine whether information has been accessed or is reasonably believed to have been accessed without authorization, among them indications that the information was viewed, communicated with, used, or altered by a person without valid authorization or by an unauthorized person.

The notice of the breach

The notice cannot be provided to the affected person by e-mail if the breached information includes an e-mail address in combination with a password or security question and answer allowing access to an online account.

In that case, a clear and conspicuous notice must be “delivered to the consumer online when the consumer is connected to the online account from an internet protocol address or from an online location which the person or business knows the consumer customarily uses to access the online account.”

The notice must now include:

  • the telephone numbers and websites of the relevant state and federal agencies providing information regarding security breach response and identity theft prevention and protection.

If the individual or company is required to provide notice of the breach under the Gramm-Leach-Bliley Act (GLBA), the Health Insurance Portability and Accountability Act (HIPAA), or to the New York Department of Financial Services, an additional notice to the affected individuals is not required, but notice must still be provided to the New York Attorney General, the Department of State, and the Division of State Police, as well as to the consumer reporting agencies (CRAs).

A HIPAA-covered entity must report a breach to the New York Attorney General if HIPAA requires notification of the breach to the Secretary of Health and Human Services, even if the breached information is not “private information.”

If the breach affects any New York residents, the person or business must provide a copy of the template of the notice to the State Attorney General, the Department of State, and the Division of State Police; providing this notice must not delay notifying the affected persons.

It is not required, however, to notify an affected person of a breach if:

  • the persons authorized to access the private information inadvertently disclosed it and
    • the person or business reasonably determined that misuse of the information will not likely occur, or
    • financial harm to the affected persons will not likely occur or
    • emotional harm will not likely occur, if a username or e-mail address in combination with a password or security question and answer that would permit access to an online account was disclosed.

Such a determination must be documented in writing and maintained for at least five years. If the incident affects over 500 New York residents, the person or business must provide the written determination to the state attorney general within ten days after the determination.

In practice, this means that a written document must be created explaining why it was decided not to notify the affected individuals of a breach. This document must be drafted with the utmost care and should explain the circumstances of the breach, how it was discovered, what information was breached, how the risk was assessed, and why the decision not to report the breach was taken.
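To make this decision flow concrete, here is a minimal sketch, in Python, of how a business might structure the determination. The class, field, and function names are hypothetical, the conjunction between the harm prongs is read conservatively (the statute’s wording is not entirely clear), and the sketch only illustrates the logic described above; it is not legal advice or a compliant implementation.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class BreachAssessment:
    # Hypothetical fields mirroring the SHIELD Act exemption factors.
    inadvertent_disclosure_by_authorized_person: bool
    misuse_likely: bool
    financial_harm_likely: bool
    emotional_harm_likely: bool       # relevant only for online credentials
    online_credentials_exposed: bool  # username/e-mail + password or Q&A
    affected_ny_residents: int

def individual_notice_required(a: BreachAssessment) -> bool:
    # The exemption is only available when the exposure was an inadvertent
    # disclosure by a person authorized to access the private information.
    if not a.inadvertent_disclosure_by_authorized_person:
        return True
    # Conservative reading: all applicable harms must be reasonably
    # determined unlikely; the emotional-harm prong applies only when
    # online credentials were exposed.
    harm_unlikely = not a.misuse_likely and not a.financial_harm_likely
    if a.online_credentials_exposed:
        harm_unlikely = harm_unlikely and not a.emotional_harm_likely
    return not harm_unlikely

def documentation_obligations(a: BreachAssessment, determined_on: date) -> list[str]:
    # Skipping individual notice still triggers record-keeping duties.
    obligations = ["document the determination in writing and keep it at least 5 years"]
    if a.affected_ny_residents > 500:
        obligations.append(
            "provide the written determination to the NY Attorney General "
            f"within 10 days of {determined_on.isoformat()}"
        )
    return obligations
```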

The SHIELD Act does not provide a private right of action, but the remedies provided are “in addition to any other lawful remedy available.” The penalties for failure to notify increased, however, from ten to twenty dollars per instance of failed notification.

The statute of limitations is extended from two to three years from the date the Attorney General became aware of the violation or the date the notice was sent.

The data security provisions of the NY SHIELD Act, which took effect on March 21, 2020

Any person or business which owns or licenses computerized data which includes private information of a New York resident must develop, implement, and maintain a data security program.

In order to comply with the NY SHIELD Act, such a person or business must put in place administrative, technical, and physical safeguards.

A small business, defined as one having fewer than 50 employees, less than 3 million dollars in gross annual revenue in each of the last three fiscal years, or less than 5 million dollars in year-end total assets, is not exempt from this requirement. However, it is sufficient that its security program contain reasonable administrative, technical, and physical safeguards which are “appropriate for [its] size and complexity.”

The administrative safeguards include:

  • Designating one or more employees to coordinate the security program
  • Identifying reasonably foreseeable internal and external risks
  • Assessing whether sufficient safeguards have been put in place
  • Training and managing employees in the security program practices and procedures
  • Selecting service providers which can maintain appropriate safeguards, and requiring those safeguards by contract

The technical safeguards include:

  • Assessing risks in network and software design
  • Assessing risks in information processing, transmission, and storage
  • Detecting, preventing, and responding to attacks or systems failures
  • Regularly testing and monitoring the effectiveness of key controls, systems, and procedures

The physical safeguards include:

  • Assessing risks of information storage and disposal
  • Detecting, preventing, and responding to intrusions
  • Protecting against unauthorized access to or use of private information during or after the collection, transportation and destruction or disposal of the information
  • Disposing of private information within a reasonable amount of time after it is no longer needed for business purposes by erasing electronic documents in such a way that the information can no longer be read or reconstructed
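
For organizations that track such requirements programmatically, here is a minimal, hypothetical sketch in Python turning the three checklists above into a simple compliance-gap review. The item wording paraphrases the statutory categories and is illustrative only; the statute requires “reasonable” safeguards, not these specific controls.

```python
# Hypothetical checklist mirroring the N.Y. Gen. Bus. Law § 899-bb categories.
SHIELD_SAFEGUARDS = {
    "administrative": [
        "security program coordinator designated",
        "internal and external risks identified",
        "sufficiency of safeguards assessed",
        "employees trained and managed on program practices",
        "service providers vetted and bound by contract",
    ],
    "technical": [
        "network and software design risks assessed",
        "processing, transmission, and storage risks assessed",
        "attack and failure detection, prevention, and response in place",
        "key controls regularly tested and monitored",
    ],
    "physical": [
        "storage and disposal risks assessed",
        "intrusion detection, prevention, and response in place",
        "private information protected through collection, transport, disposal",
        "timely, unrecoverable disposal of unneeded private information",
    ],
}

def compliance_gaps(implemented: set[str]) -> dict[str, list[str]]:
    """Return, per safeguard category, the checklist items not yet implemented."""
    return {
        category: [item for item in items if item not in implemented]
        for category, items in SHIELD_SAFEGUARDS.items()
    }
```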