Can a school discipline students for any off-campus speech, including speech on social media?

The Supreme Court of the United States heard arguments in April in the Mahanoy Area School District v. B.L. case (audio is here).

The Mahanoy case is about a 14-year-old high school cheerleader who failed to make her school's varsity cheerleading team and vented her frustration on social media in no uncertain terms. She posted on Snapchat a photo of herself and a friend, both "flipping the bird," with the text "fuck school fuck softball fuck cheer fuck everything" superimposed on the photo. The post was made on a weekend; the picture had not been taken on school premises and did not mention or name the high school.

The cheerleading squad to which the teenager belonged had the following policy:

Please have respect for your school, coaches, teachers, and other cheerleaders and teams. Remember, you are representing your school when at games, fundraisers, and other events. Good sportsmanship will be enforced, this includes foul language and inappropriate gestures…. There will be no toleration of any negative information regarding cheerleading, cheerleaders, or coaches placed on the internet.

The teenager was suspended from the squad for a year because she had been “disrespectful” when posting the Snap message.

She sued the school district, claiming a violation of her First Amendment rights. The United States District Court for the Middle District of Pennsylvania granted summary judgment in her favor, ruling that the school had violated her First Amendment rights. The Third Circuit affirmed, holding that Tinker does not apply to off-campus speech.

The School District then filed a certiorari petition to the Supreme Court, which was granted. This is an important case as the Court will decide whether a school can discipline students for any off-campus speech, including speech on social media.

The 1969 Tinker case: should it apply to social media speech today?

The question presented to the Supreme Court is whether its “Tinker v. Des Moines Independent Community School District case, which holds that public school officials may regulate speech that would materially and substantially disrupt the work and discipline of the school, applies to student speech that occurs off campus.”

Indeed, the Court held in its 1969 Tinker case that “[s]chool officials do not possess absolute authority over their students.” For the Court, students or teachers did not “shed their constitutional rights to freedom of speech or expression at the schoolhouse gate.”

However, Tinker provided a narrow exception: student conduct, “in class or out of it, which for any reason — whether it stems from time, place, or type of behavior — materially disrupts classwork or involves substantial disorder or invasion of the rights of others,” is not protected by the First Amendment.

In Tinker, petitioners were teenagers who had planned to wear black armbands at school to protest the Vietnam War. The school had then adopted a policy banning armbands at school. Petitioners had nevertheless worn the armbands and had been suspended. The students who gave their names to this famous case, Mary Beth Tinker and John Tinker, filed an amici curiae brief in the Mahanoy case in support of Respondents (the teenager and her parents).

Law professors Bambauer, Bhagwat and Volokh’s Amici Curiae Brief

Law professors Bambauer, Bhagwat and Volokh filed an amici curiae brief in support of Respondents, explaining that they were concerned that “extending Tinker to all off-campus student speech… expected to reach campus… would empower schools to punish students for expressing unpopular or controversial viewpoints.”

They argued that, while student speech outside school activities should generally be fully protected by the First Amendment, the “schoolhouse gate” is no longer only physical, but also virtual. This argument is acutely understood in 2021, after more than a year during which students attended classes online from home because of the pandemic.

As all students’ online speech is expected to reach campus, expanding Tinker to online speech “would be an enormous expansion of schools’ power to censor the speech of their students.” They proposed applying Tinker to student speech outside the school only in certain cases, such as when students say “cruel personal things to or about each other, especially online,” that is, bullying or harassment. Disruptive speech in a school-sponsored forum could be disciplined as well.

Arguments of the Petitioners

During the arguments, Petitioners’ counsel mentioned that under state laws or policies in all fifty states, including statutory law in twenty-six states, “the standard for bullying is severe, persistent harassment that interferes, actually prevents that child from getting an education.” For instance, New York law defines “harassment” and “bullying” as “creation of a hostile environment by conduct or by threats, intimidation or abuse, including cyberbullying,” that would, inter alia, “unreasonably and substantially interfere with a student’s educational performance, opportunities or benefits, or mental, emotional or physical well-being” or “reasonably cause or would reasonably be expected to cause a student to fear for his or her physical safety.”

Petitioners argue that schools can regulate off-campus speech if it is directed at the school and the audience is the school.

During the oral argument, Justice Thomas asked Petitioners’ counsel whether the place from which a student posts on social media, at school or at home, makes a difference for applying Tinker. Counsel stated that it does not.

Justice Breyer said that the teenager’s speech, while inelegant, had not caused a material and substantial disruption, and quipped that, if swearing off campus did cause such disruption, “every school in the country would be doing nothing but punishing.”

Justice Alito said that he had no idea what speech “targeting the school” means. Justice Sotomayor asked Petitioners’ attorney whether a student could be punished for cursing at home or on the way to school (the answer was no). Justice Sotomayor added that she had been told by her clerks that “among certain populations, a certain large percentage of the population, how much you curse is a badge of honor,” asking how the line would be drawn if such language targeted the school.

Arguments of the Respondents  

Counsel for Respondents argued that “expanding Tinker would transform a limited exception into a 24/7 rule that would upend the First Amendment’s bedrock principle and would require students to effectively carry the schoolhouse on their backs in terms of speech rights everywhere they go.” If that were the case, “kids won’t have free speech, period.”

He further argued that “the blunt instrument of Tinker” is not needed to address off-campus behavior issues, because schools may regulate threats, bullying, harassment, and cheating, as long as their response to these issues is consistent with the First Amendment.

If the speaker is “under the supervision of the school,” the school can prevent the student from swearing, or even from publishing an article about teen pregnancy, but if the student is at home, the school cannot regulate the student’s speech. Counsel for Respondents further argued that Tinker means that the school can silence a speaker if the speech is going to lead to disruption. However, Tinker cannot be applied to out-of-school speech just because listeners in school might be disrupted by it.

Justice Sotomayor asked him whether a school would be powerless to regulate out-of-school speech directed at a student who is told “you are so ugly, why are you even alive,” speech that is not a true threat and, if merely spoken, not harassment. Counsel for Respondents answered that the school could regulate that conduct “if it satisfies a First Amendment, permissible definition of bullying,” adding that he thought a permissible definition of bullying under the First Amendment could be “severe or pervasive interpersonal aggression that interferes with access to education.” However, a school code preventing any speech that causes emotional harm would not be consistent with the First Amendment. Justice Sotomayor was not convinced, stating that saying “You are so ugly, why are you around” is not aggressive.

Both Justice Kagan and Justice Barrett asked about regulating cheating, Justice Kagan giving as an example a student emailing answers to geometry homework from home. Counsel for Respondents answered that cheating is conduct, not speech, and thus could be regulated, as Giboney v. Empire Storage and Ice Co. allows prohibiting speech integral to prohibited conduct (Giboney was a case about picket lines).

The case is important, as the Court will decide whether a student can be disciplined for any off-campus speech, including speech on social media. It is doubtful that the Court will hold that schools have such a right, as doing so would give schools the power to punish students for expressing their opinions whenever the intended audience is the school. A decision should be announced soon.

 

Image courtesy of Flickr user Pete, Public Domain.


New Research Project Comparing How EU Digital Services Act and Facebook Oversight Board Empower Users

My new research project as a Fellow of the Transatlantic Technology Law Forum is Comparing the powers given to users by the EU Digital Services Act and by the Facebook Oversight Board: at the service of users or overseeing them?

Abstract:
The European Union Commission published on December 15, 2020, its Proposal for a Regulation on a Single Market For Digital Services (Digital Services Act, “DSA”). This new horizontal framework aims at being a lex generalist which will be without prejudice to the e-Commerce Directive, adopted in 2003, and to the Audiovisual Media Services Directive, revised in 2018.

This new Regulation is likely to dramatically change how online platforms, particularly social media platforms, moderate content posted by their users. Very large platforms, defined by the DSA as platforms providing services to an average of 45 million or more monthly active users in the European Union, will face heightened responsibilities, such as assessing and mitigating the risks of disseminating illegal content. Whether this heightened scrutiny may lead illegal speech to migrate to smaller platforms remains to be seen.

Each Member State will designate its Digital Services Coordinator, a primary national authority ensuring the consistent application of the DSA. Article 47 of the DSA would establish the European Board for Digital Services, which will independently advise the Digital Services Coordinators on supervising intermediary services providers. A few months earlier, the creation of Facebook’s Oversight Board (“the Oversight Board”) was met with skepticism, but also with hope and great interest. Dubbed by some “Facebook’s Supreme Court,” it is intriguing in nature, appearing to be both a corporate board of the U.S. public company Facebook and a private court of law, with powers to regulate the freedom of expression of Facebook and Instagram users around the world. The main mission of this independent body is to issue recommendations on Facebook’s content policies and to decide whether Facebook may, or may not, keep or remove content published on its two platforms, Facebook and Instagram. The Oversight Board’s decisions are binding on Facebook, unless implementing them would violate the law.

The law of the states thus remains the ultimate arbiter of the illegality of content. However, this legal corpus is far from homogeneous. Both the European Court of Justice and the European Court of Human Rights have addressed the issue of illegal content. In the European Union, several instruments, such as the revised Audiovisual Media Services Directive and the upcoming Regulation on preventing the dissemination of terrorist content online, address harmful, illegal content. Member States each have their own legal definitions. France is currently anticipating the DSA by amending its June 21, 2004 Law for Confidence in the Digital Economy. If the French bill becomes law, platforms will have to make public the resources devoted to the fight against illegal content, and will have to implement procedures, as well as human and technological resources, to inform judicial or administrative authorities, as soon as possible, of actions taken following an injunction by these courts or authorities.

The DSA does provide a definition of illegal content, but it aims at harmonizing the due diligence obligations of the platforms and how they address the issue. For instance, article 12 of the DSA is likely to influence how platforms’ terms and conditions are written, as it directs platforms to inform users publicly, clearly, and unambiguously about the platform’s content moderation policies and procedures, including the roles played by algorithms and by human review.

This requirement is particularly welcome, as the use of algorithms to control speech and amplify messages currently suffers from a lack of transparency and has been credibly accused of fostering bias: is algorithmic bias illegal content? The Oversight Board will only examine a few posts among the billions posted each year, and the cases submitted to it for review may be the proverbial trees hiding the forest: the use of algorithms to control speech by amplifying messages likely to trigger reactions, clicks, and likes, often monetizing hate and preventing minority speech from being heard. As freedom of expression is not only the right to impart information but also the right to receive it, are algorithms, which strive to keep us in well-delimited echo chambers, the ultimate violation of the freedom of expression?

Users are provided a way to have a say about what should be considered illegal online speech, and both the DSA and the Oversight Board aim at giving users more power. The DSA would provide users with a user-friendly complaint and redress mechanism and the right to be given an explanation if content they have posted is taken down by a platform. The Oversight Board uses a crowdsourcing scheme allowing users around the world to provide their opinion on a particular case before it is decided.

International users will also be given the power to define and shape illegal content. Is this a new form of private due process? The DSA would provide several new rights to users: the right to send a notice (article 14 on “Notice and action mechanisms”); the right to complain (article 17 on “Internal complaint-handling system”); the right to out-of-court redress (article 18 on “Out-of-court dispute settlement”); a special right to flag for those considered particularly trustworthy (article 19 on “Trusted Flaggers,” whose opinions will be given more weight than those of ordinary users); and the right to be provided an explanation (article 15 of the Digital Services Act on “Statement of reasons”).

The Oversight Board also gives powers to users: they have the power to appeal a decision made by Facebook about content posted on its two platforms, and they are also provided the opportunity to share their opinions, knowledge, or expertise in each case. Users are invited to share their explanations of local idioms and standards, shedding further light on what content should be considered illegal. As such, they are given an easy way to file an “amicus brief” comment.

Surprisingly, both the DSA and the Oversight Board appear to treat the fundamental rights of users as the ultimate standard of their mission. As stated in the DSA Explanatory Memorandum, the DSA “seeks to improve users’ safety online across the entire Union and improve the protection of their fundamental rights.” More curiously, the Oversight Board has shed new light on international human rights. The Oversight Board’s first decisions, issued on January 28, 2021, were not based on U.S. laws or on the private rules of the Facebook platform, but on international human rights law. As such, it can be argued that these decisions were “based on European values – including the respect of human rights, freedom, democracy, equality and the rule of law,” the same values which are the foundations of the DSA. This research paper will analyze the decisions of the Oversight Board during its first year, including how the cases were chosen, where the speakers of the speech at stake live, and the legal issues raised by the cases.

However, assessing the legality of content in accordance with the law is still the prerogative of the courts. The DSA gives them the right to be informed by the platforms of the effect given to their orders to act against specific illegal content (article 8). This power is, however, also given by the DSA to administrative authorities. Will the administrative judge soon have more power than the judiciary to protect freedom of expression? The DSA may lead human rights to become a mere part of corporate compliance, overseen by administrative authorities, as is, arguably, already the case with the right to privacy under the GDPR. Would tort law and civil liability be more adequate to protect the rights of users?

Regardless of their differences, the DSA and the work of the Oversight Board may have a common goal: transparency. The DSA aims at setting a higher standard of transparency and accountability, as its article 13 on “Transparency reporting obligations for providers of intermediary services” directs platforms to publish, at least once a year, a clear and comprehensible report on their content moderation practices. Very large online platforms will have to undergo external risk audits and public accountability. The Oversight Board is committed to publicly sharing written statements about its decisions and their rationale.

This research paper will compare the place and power given to victims of illegal speech by the Oversight Board’s caselaw and recommendations with the place and power given to them by the platforms in their DSA compliance practice. It will also examine how the Oversight Board’s decisions have been implemented by Facebook. Are the future European Union Digital Services Act and the corporate Oversight Board at the service of users, or are they overseeing them?


New Article: Regulating Freedom of Speech on Social Media: Comparing the EU and the US Approach

My article, Regulating Freedom of Speech on Social Media: Comparing the EU and the US Approach, was recently published by Stanford Law School. It is the second TTLF Working Paper I have published.

 

Here is the abstract of the article:

Social media platforms provide forums to share ideas, jokes, images, insults, and threats. These private companies form a contract with their users who agree in turn to respect the platform’s private rules, which evolve regularly and organically, reacting sometimes to a particular event, just as legislatures may do.

As these platforms have a global reach, yet are, for the most part, located in the United States, the articulation between the platforms’ terms of use and the laws of the states where the users are located varies greatly from country to country.

This article proposes to explore the often-tense relationships between the states, the platforms, and the users, whether their speech creates harm or they are victims of such harm.

The first part of the article is a general presentation of freedom of expression law. This part does not attempt to be a comprehensive catalog of such laws around the world and is only a general presentation of the U.S. and the European Union laws protecting freedom of expression, using France as an example of a particular country in the European Union. While the principle is freedom of speech, the legal standard is set by international conventions, such as the United Nations Universal Declaration of Human Rights or the European Convention on Human Rights.

The second part of the article presents what the author believes to be the four main justifications for regulating free speech: protecting the public order, protecting the reputation of others, protecting morality, and advancing knowledge and truth. The protection of public order entails the protection of the flag or the king, and lèse-majesté sometimes survives even in a republic. The safety of the economic market, which may dangerously sway if false information floats online, is another state concern, as is the personal safety of the public. Speech sometimes harms, even kills, or places an individual in fear for her or his life. The reputation and honor of others are easily smeared on social media, whether by defamation, insults, or hate speech, a category of speech not clearly defined by law yet at the center of the debate on online content moderation, including whether there is a right to speak anonymously online. What is “morality” is another puzzling question, as blasphemy, indecency, and even pornography have different legal definitions around the world and different private definitions by the platforms. Even truth is an elusive concept, and both states and platforms struggle to define what is “fake news” and whether clearly false information, such as denying the existence of the Shoah, should be allowed to be published online. Indeed, while four justifications for regulating speech are delineated in this article, the speech and conduct that should be considered an attack on values worthy of protection is not viewed uniformly by the different states and the different platforms, and how the barriers to speech are placed provides a telling picture of the state of democracy.

The third part examines who should have the power to delete speech on social media. States may exert censorship on the platforms, or even on the pipes, to block access to speech and punish, sometimes harshly, speakers daring to trespass the barriers to free speech erected by the states. For the sake of democracy, the integrity of the electoral process must not be threatened by false information, whether about the candidates, about alleged fraud, or even about the result of the vote.

Social media platforms must respect the law. In the United States, Section 230 of the Communications Decency Act of 1996 provides immunity to platforms for third-party content, but also for screening offensive content. Section 230 has been modified several times, and many bills, from both sides of the political spectrum, aim at further reform. In the European Union, the e-Commerce Directive similarly provides a safe harbor to social media platforms, but the law is likely to change soon, as the Digital Services Act proposal was published in December 2020. The platforms have their own rules, and may even soon have their own private courts, for example the recently created Facebook Oversight Board. However, other private actors may have a say on what can be published on social media, for instance employers or the governing bodies of regulated professions, such as judges or politicians. Even private users may censor the right of others to speak freely, using copyright laws, or may use public shaming to frighten speakers into silence. Such fear may lead users to self-censor their speech, to the detriment of the marketplace of ideas, or to delete controversial messages. Public figures, however, may not have the right to delete social media posts or to block users.

The article was finished in the last days of 2020, a year which saw attempts to use social media platforms to sway the U.S. elections by spreading false information, the semi-failed attempt of France to pass a law protecting social media users against hate speech, and false news about the deadly Covid-19 virus spreading online like wildfire through malicious or naïve posts. A few days after the article was completed, the U.S. Capitol was attacked, on January 6, 2021, by a seditious mob seeking to overturn the results of the presidential election in the belief that the election had been rigged, a false claim amplified by thousands of users on social media, including the then President of the United States. Several social media platforms responded by blocking the President’s accounts, either temporarily or, as Twitter did, permanently.

 

 


New York State May Soon Protect Child Influencers

A bill recently introduced in the New York Assembly aims at broadening the scope of the laws governing child performers to include “children who participate in online videos that generate earnings.”

This would allow children performing in “influencer videos” to be considered child performers and thus benefit from the protections of the New York law covering child performers, particularly N.Y. Lab. Law §§ 150–154-A on the employment and education of child performers.

The justification of the bill argues that:

The internet has created a world where anyone with a smartphone can become a producer of content and parents can easily upload videos of their children and instantly create an internet star. 

Unfortunately, these young online actors lack the protections granted to children working in film and television. Increasingly more parents and children are becoming content producers each day and no regulations are in place to protect children against exploitation.”

Expansion of what is “artistic or creative services” under the law

The bill proposes to modify N.Y. Lab. Law § 150(1) – Definitions to include “influencer” in the list of what is “artistic or creative services” under the law. Such services would include:

participation in a video that is posted to a video-sharing social networking internet website which generates earnings from sponsors or by other means, based on the number of views of such video, based on the number of clicks on a link leading to such video.”

A child influencer would be a child performer

The bill would change the definition of a “child performer” under N.Y. Lab. Law § 150 to include a child who:

agrees to render artistic or creative services [as defined by section 150 of the labor code law] where such artistic or creative service were recorded in the state of New York or uploaded to a sharing and/or social networking internet website from within the state of New York.”

The bill would change the definition of a “child performer’s employer” under N.Y. Lab. Law § 150 to include persons employing a “child performer” to furnish “artistic or creative services,” and “video-sharing and/or social networking internet website[s] that generat[e] earnings from videos qualifying as artistic or creative services by a child performer.”

New York Child Performer Law

“Child influencers” would thus be considered by New York law as child performers and would thus benefit from the protection of the law for child performers.

New York child performer law protects the interests of child performers. Under Article 7 of the New York State Estates, Powers and Trusts Law (N.Y. Est. Powers and Trusts Law § 7-7.1 – Child performer trust account), the custodian and guardian of a child performer must establish a child performer account within fifteen days of the commencement of employment, if such an account has not been previously established.

Such accounts are sometimes referred to as “Coogan Accounts,” after Jackie Coogan of The Kid fame, whose fate, losing the millions of dollars he earned as a child performer to parental mismanagement, led to the passing of the California Child Actor’s Bill, the “Coogan Act,” which inspired the New York law.

Under New York law, the custodian of the trust account must then “promptly” notify the child performer’s employer of the existence of the account. The employer must transfer fifteen percent of gross earnings to this account “within thirty days following the final day of employment.” The parent, legal guardian, or custodian can require that more than fifteen percent of the gross earnings be transferred to the trust account. If the balance of the account reaches two hundred fifty thousand dollars or more, a trust company must be appointed as custodian of the account.

Such thresholds may easily be reached by some child influencers with high earning power: two minors ranked at the top of Forbes’ list of the Highest-Paid YouTube Stars of 2019, Anastasia Radzinskaya placing third with an estimated $18 million in earnings and Ryan Kaji taking first place with estimated earnings of $26 million. Ryan Kaji became popular on YouTube, where he can be seen reviewing and “unboxing” toys. He earned $29.5 million in 2020 through ad revenue alone, and much more, an estimated $200 million, through derivative product deals, such as toys and clothes bearing his name.
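To make the arithmetic concrete, below is a minimal sketch in Python of the fifteen-percent transfer rule and the $250,000 trust-company threshold described above, applied to the earnings figures quoted in this post; the function and variable names are purely illustrative and not drawn from the statute.

```python
# Illustrative sketch only: hypothetical names, earnings figures taken from this post.
TRANSFER_RATE = 0.15               # minimum share of gross earnings owed to the trust account
TRUST_COMPANY_THRESHOLD = 250_000  # account balance at which a trust company must be appointed

def minimum_trust_transfer(gross_earnings: float, rate: float = TRANSFER_RATE) -> float:
    """Minimum amount the employer must transfer to the child performer's trust account."""
    return gross_earnings * rate

for name, estimated_earnings in [("Anastasia Radzinskaya", 18_000_000), ("Ryan Kaji", 26_000_000)]:
    transfer = minimum_trust_transfer(estimated_earnings)
    # For earners at this level, a single year's mandatory transfer alone already
    # exceeds the balance threshold that triggers appointment of a trust company.
    print(f"{name}: at least ${transfer:,.0f} due to the trust account "
          f"(exceeds ${TRUST_COMPANY_THRESHOLD:,} threshold: {transfer >= TRUST_COMPANY_THRESHOLD})")
```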

This issue has already been addressed in France, and the New York bill is likely to inspire similar state bills.

 


Works Posted on Public Instagram Accounts Can be Used by Third Parties Without Infringing Copyright

A recent decision from the Southern District of New York (SDNY) should give pause to artists promoting their works by posting them on their public social media accounts.

Mashable had offered professional photographer Stephanie Sinclair 50 dollars for licensing rights for one of her photographs, to illustrate an article on female photojournalists featuring her and several others.

Ms. Sinclair did not accept the offer, and Mashable then embedded the photo that Plaintiff had previously published on her public Instagram account to illustrate the article. Ms. Sinclair asked Mashable to take down the picture. It refused and Ms. Sinclair then filed a copyright infringement suit.

Mashable moved to dismiss, claiming that, by posting the photograph on her public Instagram account, Plaintiff had granted Instagram the right to license the photograph, and that, in turn, Instagram had granted Mashable a valid sublicense.

United States District Judge Kimba M. Wood of the Southern District of New York (SDNY) granted the motion to dismiss, finding that Mashable had used the photograph pursuant to a valid sublicense from Instagram, whose Terms of Use state that users grant Instagram “a non-exclusive, fully paid and royalty-free, transferable, sub-licensable, worldwide license to the [c]ontent [posted] on or through [Instagram],” provided the content is posted on a public, not a private, account.

An image published on a website can be “embedded” in another page by adding a specific “embed” code to that page’s HTML, which directs the browser to the third-party server to retrieve the image. The embedded image is then hyperlinked to the third-party website.

In our case, Plaintiff’s photograph appeared embedded on the Mashable page but was not hosted on Mashable’s server. It remained hosted on Instagram’s server, where Plaintiff had posted it on her public account.
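For readers less familiar with the mechanics of embedding, here is a minimal sketch contrasting a hosted image with an embedded one; the URLs and post identifier are hypothetical, and the snippet simply prints the two kinds of markup rather than reproducing Mashable’s actual code.

```python
# Illustrative sketch only: hypothetical URLs and post identifier.
# A "hosted" image is served from the publisher's own server, so the publisher
# stores a copy of the file. An "embedded" image is merely referenced by the
# publisher's HTML; the reader's browser fetches it from the third party's
# (here, Instagram's) server, where the author originally posted it.

hosted_markup = '<img src="https://example-news-site.com/images/photo.jpg" alt="hosted copy">'

embedded_markup = (
    '<blockquote class="instagram-media" '
    'data-instgrm-permalink="https://www.instagram.com/p/HYPOTHETICAL_POST_ID/">'
    '</blockquote>\n'
    '<script async src="https://www.instagram.com/embed.js"></script>'
)

print("Hosted:\n", hosted_markup)
print("Embedded (no copy of the photograph ever sits on the publisher's server):\n", embedded_markup)
```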

In a similar case, Goldman v. Breitbart, the SDNY had rejected in 2018 the “server test” laid out by the Ninth Circuit in Perfect 10, Inc. v. Amazon.com, Inc., which was a hyperlinking case, not an embedded-image case. Under this test, images appearing on a site using frames and in-line links are not a display of the full-size images stored on and served by another website.

While noting that in-line linking, like embedding, relies on HTML code instructions, United States District Judge Katherine B. Forrest had not applied the server test in Goldman, finding it inappropriate to the specific facts of that case and also “[not] adequately grounded in the text of the Copyright Act.” She found that using an embedded tweet reproducing a photograph protected by copyright to illustrate an article was infringing.

In Goldman, the author had not originally published the photograph on Twitter. Instead, he had uploaded it to his Snapchat Story. It went viral and was published on Twitter by a third party. Judge Wood explained in a footnote that, as Instagram had granted Mashable a valid license to display the work, the court did not need to address the issue of whether embedding an image is an infringing display of the work.

Take away: by posting a work protected by copyright on a public Instagram account, an author automatically grants rights to the social media site, including the right to sublicense a worldwide license to use the work. An author not wishing to grant these rights is left only with the option of making his or her account private.


The FTC Settles with Tea Company over Failure to Adequately Disclose Connection with Influencers

The Federal Trade Commission (FTC) announced on March 6 that it had settled a lawsuit with the Florida-based company Teami LLC (Teami) over its claims that drinking its teas has positive health effects, including rapid weight loss, and over its use of influencers for marketing purposes without adequately disclosing the connection.

According to the March 5, 2020 complaint, Teami paid hundreds of influencers between June and late-October 2018 to endorse Teami products on Instagram, by a post or a video.

 

The influencers who allegedly had not made adequate disclosures were not named as defendants, but each received a warning letter from the FTC asking them to describe the actions they are or will be taking “to ensure that [their] social media posts endorsing brands and businesses with which [they] have a material connection clearly and conspicuously disclose [their] relationships.”

According to the complaint, the FTC had first contacted the defendants, Teami’s co-founders, in April 2018 to inform them that any material connections to any endorsers had to be disclosed clearly and conspicuously by using unambiguous language in a way that consumers can easily notice the disclosure without having to look for it.

Teami then implemented a social media endorsement policy, informed influencers about it, and provided guidance on how to disclose their relationship with Teami in their sponsored posts.

However, some of the video endorsements did not disclose the connection between Teami and the endorser within the video itself (see the Cardi B video here), while others did not include a disclosure within the first two or three lines of the accompanying text that are visible without clicking on “more.” Similarly, some Instagram posts only displayed the connection, when viewed in a follower’s feed, if the follower clicked on “more.”

As such, the FTC claimed that these sponsored social media posts were misrepresentations or deceptive omissions of material fact, which are deceptive acts or practices prohibited by Section 5(a) of the FTC Act.

Under the proposed court order settling the FTC’s complaint, Defendants and their agents are permanently restrained and enjoined “from making, or assisting others in making, any misrepresentation, expressly or by implication, about the status of any endorser or person providing a review of the product, including a misrepresentation that the endorser or reviewer is an independent or ordinary user of a product.”

Takeaway: Disclosures about the material connection must be clear and conspicuous

As explained in the FTC Guides Concerning the Use of Endorsements and Testimonials in Advertising, a “material connection” between the endorser and the marketer of a product must be clearly and conspicuously disclosed, unless the connection is already clear from the context of the communication containing the endorsement.

A material connection may be a business, family, or personal relationship. The proposed court order defines a clear and conspicuous disclosure as one “difficult to miss (i.e., easily noticeable) and easily understandable by ordinary consumers.”

If the communication is solely aural (say, a podcast) or solely visual (such as an Instagram post), then “the disclosure must be made through the same means through which the communication is presented.”

If the communication is both aural and visual, such as a video, then “the disclosure must be presented simultaneously in both the visual and audible portions of the communication even if the representation requiring the disclosure is made in only one means.”

This means that if the influencer endorses product X only by wearing or showing it in his video, the disclosure must also be spoken, and if the influencer endorses product or service Y only verbally in her video, without showing it, the disclosure must also be written.

A visual disclosure “must stand out from any accompanying text or other visual elements so that it is easily noticed, read, and understood.”

An audible disclosure “must be delivered in a volume, speed, and cadence sufficient for ordinary consumers to easily hear and understand it.”

Any disclosure “must be unavoidable.” It “must use diction and syntax understandable to ordinary consumers and must appear in each language in which the representation that requires the disclosure appears.” So if the influencer makes her endorsement in several languages, the disclosure must be in all of these languages.
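As a rough illustration, and not a restatement of the FTC’s order, the modality rule described above can be sketched as a simple decision function; the function name and return values below are hypothetical.

```python
# Illustrative sketch only: a rough encoding of the modality rule described above.
def required_disclosure_modes(has_audio: bool, has_visuals: bool) -> set:
    """Return the modes in which the material-connection disclosure must appear."""
    if has_audio and has_visuals:   # e.g., a video: disclose in both portions
        return {"audible", "visual"}
    if has_audio:                   # e.g., a podcast: disclose audibly
        return {"audible"}
    if has_visuals:                 # e.g., an Instagram photo post: disclose visually
        return {"visual"}
    return set()

# A sponsored video must carry the disclosure in both the audio track and on
# screen, even if the endorsement itself appears in only one of them.
print(required_disclosure_modes(has_audio=True, has_visuals=True))  # {'audible', 'visual'}
```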

Takeaway: Have a social media endorsement policy AND enforce it

The FTC complaint alleged that several social media posts sponsored by Teami did not follow Teami’s social media policy.

This shows that having a social media endorsement policy, but not enforcing it, is like having no policy at all. Marketers should train their employees and direct several employees to monitor social media postings to ensure that they are compliant.


Parody Social Media Accounts & the Computer Fraud and Abuse Act

The U.S. District Court for the District of Oregon dismissed last month a suit brought by a middle school assistant principal claiming that, by creating fake social media accounts using his name and likeness, some students had violated the Computer Fraud and Abuse Act (CFAA).

Plaintiff Adam Matot is an assistant principal of a middle school in Oregon. He claimed that two or more students there created fake social media accounts on Facebook and Twitter using his name and likeness. The Facebook account had some seventy “friends,” who were allegedly able to see obscene pictures and read comments defamatory of Plaintiff when accessing the fake account.

Plaintiff filed a first complaint last January, alleging computer fraud and abuse under the CFAA, 18 U.S.C. § 1030, and defamation.

He later filed an amended complaint, also naming the parents of one of the minors as defendants and claiming that they had negligently failed to supervise their child.

One of the parents filed a motion to dismiss the case for lack of subject matter jurisdiction which was granted by the District Court of Oregon on September 26.

Creating Parody Social Media Accounts Not a Violation of the Computer Fraud and Abuse Act

The CFAA is a federal law which prohibits accessing a computer without authorization, or exceeding authorized access, with intent to defraud.

§ 1030(e)(6) defines exceeding authorized access as “to access a computer with authorization and to use such access to obtain or alter information in the computer that the accesser is not entitled so to obtain or alter.”

Congress enacted this law in 1984 to fight computer hackers, yet over the years some plaintiffs have tried to expand the scope of the law to support various claims. However, interpreting a criminal statute broadly is dangerous for civil liberties. Just think of it: a teenager playing games past his curfew is accessing the family computer without authorization, but should he be tried in a federal court for that offense?

In our case, Plaintiff claimed that, by violating the Terms of Service of the social networking sites, defendants used their services without authorization or exceeded their authorization and thus violated the CFAA. The computers allegedly abused were those operating the social media sites.

But the Oregon District Court was not convinced by these arguments, citing a 2012 Ninth Circuit case, U.S. v. Nosal, about a breach of a company policy forbidding disclosure of confidential information. Chief Judge Kozinski, who wrote the opinion, noted that interpreting the CFAA broadly would make shopping or playing games on company time, using the employer’s computer, a federal crime. The Ninth Circuit held that exceeding authorized access within the meaning of the CFAA “does not extend to violations of use restrictions” (Nosal at 863).

Bullying

In the United States v. Drew case, the prosecution unsuccessfully argued that a woman posing as a 16-year-old boy on MySpace had violated the site’s Terms of Service prohibiting users from lying about their age or name and had therefore violated the CFAA. In that case, the woman used the fake account to develop an online relationship with one of her daughter’s classmates, a 13-year-old who committed suicide shortly after the fictitious boy harshly told her he did not want to be friends with her anymore. Cyber-bullying is a serious issue, and may have tragic consequences.

Plaintiff described in his first complaint the teenagers who created the fake social media accounts as bullies. But how would you describe an educator who claimed that some of the children he is in charge of educating should face prison sentences because they mocked him?

Plaintiff went after Defendants, minors who were students at the school where he is assistant principal, with a very heavy hand, even claiming, in the amended complaint, that he should be granted leave to state a claim under the Racketeer Influenced and Corrupt Organizations Act (RICO) in case his claim under the CFAA were dismissed…

As noted by Magistrate Judge Coffin in one of the two Findings and Recommendations that he filed, Congress enacted RICO to address organized crime and its economic consequences. Judge Coffin wrote that “Congress did not intend to target the misguided attempts at retribution by juvenile middle school students against an assistant principal in enacting RICO.”

While it may seem laughable that a federal law targeting organized crime should be a remedy for a (very) stupid student prank, I am also concerned about the negative consequences that such a suit could have on teenagers. Being sued in court under a federal criminal statute can be very scary, even for an adult.

Also, Plaintiff had asked the court for damages, punitive damages, limitations on the defendants’ Internet use, and “forfeiture of equipment,” which, I assume, means the computers and/or hand-held devices used to create the fake accounts. He also asked the court to enjoin the teenagers from participating on social networking sites, at least for a “reasonable” period of time. Such a ruling would arguably have run afoul of the First Amendment.

Plaintiff also tried to characterize the material posted online as “pornographic and obscene material of a prurient nature.” Such speech is not protected by the First Amendment, but the Court was probably not convinced that the posts were pornographic. Plaintiff might have been more successful pursuing a defamation claim in state court, but since the case was dismissed, one can only speculate on how a court would have ruled.

Image is Computer on Fire courtesy of Flickr user Matt Mets pursuant to a CC BY 2.0 license.

 


SCA Protects Privacy of Non-Public Facebook Wall Posts

The U.S. District Court for the District of New Jersey ruled on August 20 that non-public Facebook wall posts are covered by the Stored Communications Act (SCA). However, the authorized user exception applied in this case, as a colleague who had lawful access to Plaintiff’s Facebook wall had forwarded the controversial post, unsolicited, to their common employer.

The case is Ehling v. Monmouth-Ocean Hospital.

Plaintiff Deborah Ehling was a registered nurse and paramedic working for Defendant Monmouth-Ocean Hospital Service Corp. (MONOC). In June 2009, she posted a message on her account implying that the paramedics who took care of the man who had killed a security guard outside the U.S. Holocaust Memorial Museum in 2009 should have let him die.

The privacy settings of Ehling’s Facebook account limited access to her Facebook wall to her Facebook friends. No MONOC managers were her Facebook friends, but several of her MONOC coworkers were, including Tim Ronco, who apparently decided on his own to provide screenshots of Ehling’s controversial Facebook post to a MONOC manager.

Plaintiff was then suspended with pay and was told that her comment reflected a “deliberate disregard for patient safety.” Plaintiff then filed a complaint with the National Labor Relations Board (NLRB) which found that MONOC had not violated Ehling’s privacy as the post had been sent unsolicited to management.

Plaintiff was eventually fired and filed an action against MONOC, but the court granted Defendant’s motion for summary judgment. I will only discuss the SCA claim here.

Stored Communications Act

Plaintiff claimed that Defendant had violated the SCA when accessing the messages posted on her Facebook wall.

The SCA, 18 U.S.C. § 2701, is part of the Electronic Communications Privacy Act (ECPA) of 1986. The SCA forbids unlawful access to stored communications, that is, “(1) intentionally access[ing] without authorization a facility through which an electronic communication service is provided; or (2) intentionally exceed[ing] an authorization to access that facility.”

According to the Ehling court, Facebook wall posts are indeed electronic communications as defined by 18 U.S.C. § 2510(12), that is, “any transfer of signs, signals, writing, images, sounds, data, or intelligence of any nature transmitted in whole or in part by a wire, radio, electromagnetic, photoelectronic or photooptical system that affects interstate or foreign commerce.”

Facebook users transmit data over the Internet, from their devices to Facebook’s servers, and thus their posts are electronic communications within the meaning of the SCA. No new issue here. The New Jersey court cited the 2010 Central District of California case, Crispin v. Audigier, which stated that Facebook and MySpace were electronic communication service providers. Again, nothing new.

Finally, Facebook posts are saved on its servers indefinitely, thus backing them up. Therefore, Facebook wall posts are in electronic storage within the meaning of the SCA, 18 U.S.C. § 2510(17)(B), which defines the storage of electronic communications for purposes of backing them up. We are all set: the SCA applies in this case.

Public Electronic Communications/Private Electronic Communications

The controversial issue was instead whether Plaintiff’s Facebook posts were public or private. This is important, as the ECPA only protects private communications. The Ninth Circuit had noted in the 2002 case Konop v. Hawaiian Airlines, Inc. that “the legislative history of the [SCA] suggests that Congress wanted to protect electronic communications that are configured to be private, such as email and private electronic bulletin boards.”

But a completely public BBS is not protected by the SCA. Indeed, the Senate report on the SCA noted that:

the bill does not for example hinder the development or use of electronic bulletin boards or other similar services where the availability of information about the service, and the readily accessible nature of the service are widely known and the service does not require any special access code or warning to indicate that the information is private. To access a communication in such a public system is not a violation of the Act, since the general public has been ‘authorized’ to do so by the facility provider”. (S. REP. NO. 99-541, at 36)

Konop was cited in Crispin v. Audigier, where the court reasoned that “there is no basis for distinguishing between a restricted-access BBS and a user’s Facebook wall or MySpace comments.” The New Jersey District court cited Audigier to conclude that non-public Facebook wall postings are covered by the SCA.

As the privacy settings of Plaintiff’s Facebook account prevented non-Facebook friends from accessing the messages on her wall, these messages were not really “public,” and therefore the SCA applied to them. However, the authorized user exception of the SCA applied in this case.

Why the SCA’s Authorized User Exception Applied in this Case

There is no liability under the SCA if access “[is] authorized … by a user of that service with respect to a communication of or intended for that user,” 18 U.S.C. § 2701(c)(2).

The court cited its own 2009 Pietrylo v. Hillstone Rest. Grp. case, which had found that there is no violation of the SCA if the access to an electronic communication has been authorized. In Pietrylo, the manager of a restaurant had accessed the MySpace account of an employee, accessible only by invitation, by asking another employee to provide him the password. In Ehling, one of Plaintiff’s colleagues had voluntarily forwarded the electronic communication to the employer “without any coercion or pressure.” Therefore, the access was authorized. That is the difference: asking for or coercing access, versus learning about the communication from an unsolicited third party.

Take away

Case law is consistent on this issue. While employers should not coerce or pressure employees to provide them access to the social media account of another employee, it is not illegal for them, under the SCA, to access a social media post if a third party willingly shares this information with them.

As for providing access to one’s own social media account to one’s employer, New Jersey recently enacted a law prohibiting employers from asking for user names, passwords, or other means of accessing employees’ electronic communications devices. Several states have similar laws, but New York is not one of them yet.

Image is Facebook wall courtesy of Flickr user Marcin Wichary pursuant to a CC BY 2.0 license.

 
