Can a person be liable for retweeting a defamatory tweet?

section 230 user retweet defamatory

Under traditional principles of defamation law, one can be liable for repeating a defamatory statement to others. Does the same principle apply, however, on social media such as Twitter, where one can easily repeat the words of others via a retweet?

Hacking, tweet, retweet, lawsuit

A high school student hacked the server hosting the local middle school’s website, and modified the web page of plaintiff, a teacher at the school, to make it appear she was seeking inappropriate relationships. Another student tweeted a picture of the modified web page, and several people retweeted that picture.

The teacher sued the retweeters for defamation and reckless infliction of emotional distress. The court dismissed the case, holding that 47 USC §230 immunized defendants from liability as “users” of an interactive computer service. Plaintiff sought review with the New Hampshire Supreme Court. On appeal, the court affirmed the dismissal.

Who is a “user” under Section 230?

Section 230 provides, in relevant part, that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider”. Importantly, the statute does not define the word “user”. The lower court held that defendant retweeters fit into the category of “user” under the statute and therefore could not be liable for their retweeting, because to impose such liability would require treating them as the publisher or speaker of information provided by another.

Looking primarily at the plain language of the statute, and guided by the 2006 California case of Barrett v. Rosenthal, the state supreme court found no basis in plaintiff’s arguments that defendants were not “users” under the statute. Plaintiff had argued that “user” should be interpreted to mean libraries, colleges, computer coffee shops and others who, “at the beginning of the internet” were primary access points for people. And she also argued that because Section 230 changed common law defamation, the statute must speak directly to immunizing individual users.

The court held that it was “evident” that Section 230 abrogated the common law of defamation as applied to individual users. “That individual users are immunized from claims of defamation for retweeting content they did not create is evident from the statutory language.”

Banaian v. Bascom, — A.3d —, 2022 WL 1482521 (N.H. May 11, 2022)


Old social media posts violated trade dress infringement injunction

social media trade dress
The parties in the case of H.I.S.C., Inc. v. Franmar are competitors, each making garden broom products. In earlier litigation, the defendant filed a counterclaim against plaintiff for trade dress infringement, and successfully obtained an injunction against plaintiff, prohibiting plaintiff from advertising brooms designed in a certain way. Defendant asked the court to find plaintiff in contempt for, among other reasons, certain social media posts that plaintiff posted before the injunction, but that remained online after the injunction was entered. The court agreed that the continued availability of such posts was improper and found plaintiff in contempt for violating the injunction.

The court noted that the injunction prohibited “[a]dvertising, soliciting, marketing, selling, offering for sale or otherwise using in the United States the [applicable product trade dress] in connection with any garden broom products.” It observed that “[o]n the Internet and in social media, a post from days, weeks, months, or even years ago can still serve to advertise a product today.” The court cited Ariix, LLC v. NutriSearch Corp., 985 F.3d 1107, 1116 n.5 (9th Cir. 2021), in which that court noted that one prominent influencer receives $300,000 to $500,000 for a single Instagram post endorsing a company’s product – a sum surely including both the post itself and an agreement to continue allowing the post to be visible to consumers for a substantial duration of time. Interestingly, the court found that the nature of a social media post may be different from a television or radio advertisement that has a fixed air date and time. Accordingly, the court found it was improper for social media posts published before the injunction to remain online.

H.I.S.C., Inc. v. Franmar Int’l Importers, Ltd., 2022 WL 104730 (S.D. Cal. January 11, 2022)


Executive order to clarify Section 230: a summary

Section 230 executive order

Late yesterday President Trump took steps to make good on his promise to regulate online platforms such as Twitter and Facebook, releasing a draft executive order to that end. Here is a summary of the key points. The draft order:

  • States that it is the policy of the U.S. to foster clear, nondiscriminatory ground rules promoting free and open debate on the Internet. It is the policy of the U.S. that the scope of Section 230 immunity should be clarified.
  • Argues that a platform becomes a “publisher or speaker” of content, and therefore not subject to Section 230 immunity, when it does not act in good faith to restrict access to content (in accordance with Section 230(c)(2)) that it considers to be “obscene, lewd, lascivious, filthy, excessively violent, harassing or otherwise objectionable.” The executive order argues that Section 230 “does not extend to deceptive or pretextual actions restricting online content or actions inconsistent with an online platform’s terms of service.”
  • Orders the Secretary of Commerce to petition the FCC, requesting that the FCC propose regulations to clarify the conditions around a platform’s “good faith” when restricting access to or the availability of content. In particular, the requested rules would examine whether the action was, among other things, deceptive, pretextual, inconsistent with the provider’s terms of service, the product of unreasoned explanation, or without meaningful opportunity to be heard.
  • Directs each federal executive department and agency to review its advertising and marketing spending on online platforms. Each is to provide a report in 30 days on: amount spent, which platforms supported, any viewpoint-based restrictions of the platform, assessment whether the platform is appropriate, and statutory authority available to restrict advertising on platforms not deemed appropriate.
  • States that it is the policy of the U.S. that “large social media platforms, such as Twitter and Facebook, as the functional equivalent of a traditional public forum, should not infringe on protected speech”.
  • Re-establishes the White House “Tech Bias Reporting Tool” that allows Americans to report incidents of online censorship. These complaints are to be forwarded to the DoJ and the FTC.
  • Directs the FTC to “consider” taking action against entities covered by Section 230 who restrict speech in ways that do not align with those entities’ public representations about those practices.
  • Directs the FTC to develop a publicly-available report describing complaints of activity of Twitter and other “large internet platforms” that may violate the law in ways that implicate the policy that these are public fora and should not infringe on protected speech.
  • Establishes a working group with states’ attorneys general regarding enforcement of state statutes prohibiting online platforms from engaging in unfair and deceptive acts and practices. This working group is also to collect publicly available information for the creation and monitoring of user watch lists, based on users’ interactions with content and with other users (likes, follows, time spent), and to monitor users based on their activity “off the platform”. (It is not clear whether that means “off the internet” or “on other online places”.)

Influencer agreements: what needs to be in them

If you are a social media influencer, or are a brand looking to engage an influencer, you may need to enter into an influencer agreement. Here are five key things that should be in the contract between the influencer and the brand: 

  • Obligations 
  • Payment 
  • Content ownership 
  • Publicity rights 
  • Endorsement guidelines compliance 

Obligations under the influencer agreement.

The main thing that a brand wants from an influencer is for the influencer to say certain things about the brand’s products, in a certain way, and at certain times. What kind of content? Photos? Video? Which platforms? What hashtags? When? How many posts? The agreement should spell all these things out.

Payment.

Influencers are compensated in a number of ways. In addition to getting free products, they may be paid a flat fee upfront or from time to time. And it’s also common to see a revenue share arrangement. That is, the influencer will get a certain percentage based on sales of the products she is endorsing. These may be tracked by a promo code. The contract should identify all these amounts and percentages, and the timing for payment.

So what about content ownership? 

The main work of an influencer is to generate content. This could be pictures posted to Instagram, tweets, or video posted to her story. All that content is covered by copyright. Unless the contract says otherwise, the influencer will own the copyright. If the brand wants to do more with that content outside of social media, that needs to be addressed in the influencer agreement.

And then there are rights of publicity. 

Individuals have the right to determine how their image and name are used for commercial purposes. If the brand is going to feature the influencer on the brand’s own platform, then there needs to be language that specifies the limits on that use. That’s key to an influencer who wants to control her personal brand and reputation. 

Finally, endorsement guidelines and the influencer agreement. 

The federal government wants to make sure the consuming public gets clear information about products. So there are guidelines that influencers have to follow. You have to know what these guidelines are to stay out of trouble. And the contract should address what happens if these guidelines aren’t followed.

See also: When is it okay to use social media to make fun of people?

About the author: Evan Brown is an attorney helping individuals and businesses with a wide variety of agreements involving social media, intellectual property and technology. Call him at (630) 362-7237 or send email to [email protected]

Twitter account hacked, chaos ensued, but no legal claims stuck

Plaintiff owned a Twitter account and operated a related blog. Plaintiff used the Twitter account to drive traffic and revenue to the blog, conduct business via direct messages, and promote its brand, for which it claimed common law trademark rights.

Unknown hackers took control of plaintiff’s Twitter account by changing the email address associated with it. This locked plaintiff out. Plaintiff contacted Twitter several times to report the hack, but Twitter declined to take action because the complaints did not come from the email address then associated with the account. Plaintiff had not enabled Twitter’s two-factor authentication feature and claimed Twitter failed to adequately inform users about it.

While in control of the account, the hackers used plaintiff’s credit card without permission to purchase 93,000 promoted tweets and posted spam messages, including ones falsely advertising that the account was for sale and offering free iPhones. Plaintiff submitted refund requests to Twitter, but claimed Twitter’s process was broken. Twitter did not restore plaintiff’s access until after the lawsuit was filed.

So let’s sue Twitter

Plaintiff sued Twitter for multiple claims, including contributory trademark infringement, breach of contract, negligence, breach of bailment, and unfair competition. The court granted Twitter’s motion to dismiss all claims, though some were dismissed with permission for plaintiff to amend.

How the court ruled

Plaintiff claimed contributory trademark infringement, arguing that Twitter allowed the hackers to control plaintiff’s account after being notified of the hack, which led to misuse of plaintiff’s mark. The court dismissed this claim because plaintiff did not allege that Twitter had actual or constructive knowledge that trademark infringement was occurring. Simply receiving reports of a hack was not enough to show that Twitter knew or should have known the account’s use was infringing a trademark.

For the breach of contract claim, plaintiff pointed to Twitter’s Terms of Service (TOS), asserting that Twitter had obligations to maintain access and protect content. The court found that the TOS provisions cited did not amount to enforceable promises about uninterrupted access or account security. The court also rejected plaintiff’s claim that Twitter had breached an implied contract, finding no facts showing Twitter made any implied promises. Finally, the court dismissed the breach of the implied covenant of good faith and fair dealing because it was based on the same allegations as the breach of contract claim and added nothing new.

In its negligence and recklessness claim, plaintiff argued that Twitter failed to use reasonable care in securing accounts and responding to the hack. The court rejected this claim for three reasons. First, plaintiff failed to show that Twitter owed a legal duty separate from the contract. Second, the only damages alleged were economic losses, which are barred in negligence cases unless accompanied by personal or property harm. Third, the negligence allegations were nearly identical to the contract claims, making the tort claim impermissibly duplicative.

Plaintiff also alleged a breach of the duty of bailment, claiming Twitter had custody of its credit card information and private messages. The court rejected this claim, stating that digital content and payment information did not qualify as personal property that could be “delivered” and then returned, as required for a bailment. The court also noted this claim was duplicative of the breach of contract and negligence claims.

Under California’s unfair competition law, plaintiff asserted that Twitter engaged in unlawful and unfair practices. The court dismissed this claim because it was entirely derivative of the other claims. Since none of those underlying claims were properly pled, the unfair competition claim also failed.

Finally, the court dismissed plaintiff’s request for declaratory judgment, which asked for a declaration of ownership over its blog, domain, and Twitter account. The court explained that declaratory judgment is not a standalone legal claim and requires a viable underlying claim, which plaintiff had not presented.

Worldwide Media, Inc. v. Twitter, Inc., 2018 WL 5304852 (N.D. Cal., October 24, 2018)

California Supreme Court rejects Yelp takedown injunction

section 230 immunity

The Supreme Court of California ruled that Yelp could not be ordered to remove defamatory reviews from its website because federal law immunized it from that form of relief.

Plaintiffs Dawn Hassell and the Hassell Law Group sued defendant Ava Bird for defamation, false light, and intentional infliction of emotional distress based on negative Yelp reviews that plaintiffs alleged Bird had posted about the law firm. After Bird did not appear, plaintiffs obtained a default judgment awarding damages and an injunction requiring defendant to remove the reviews, and the order also directed Yelp, which was not a defendant, to remove the reviews from its website.

Yelp’s request

Yelp asked the court to set aside or modify the default judgment so it would no longer require Yelp to remove the reviews. It argued that the order violated due process and was barred by Section 230 of the Communications Decency Act because Yelp had not been sued as a defendant and could not be treated as the publisher of third-party content.

Court’s ruling

The court ruled that Yelp was entitled to immunity under Section 230 and that the order had to be revised to delete any requirement that Yelp remove the challenged reviews or later comments by the reviewers. It therefore reversed the Court of Appeal insofar as that court had upheld the denial of Yelp’s motion.

Why the court ruled as it did

The court ruled this way because ordering Yelp to remove the reviews improperly treated it as the publisher of information provided by another content provider, which Section 230 forbids. The court reasoned that plaintiffs could not avoid that immunity by suing only defendant and then obtaining an injunction that indirectly compelled Yelp to take down third-party content, and because the statutory issue resolved the case, the court did not reach Yelp’s due process argument.

Hassell v. Bird, 5 Cal. 5th 522 (Cal., July 2, 2018)

Police not required to publicly disclose how they monitor social media accounts in investigations

In the same week that news has broken about how Amazon is assisting police departments with facial recognition technology, here is a decision from a Pennsylvania court that held police do not have to turn over details to the public about how they monitor social media accounts in investigations.

Under Pennsylvania’s Right-to-Know Law, the ACLU sought a copy of the Pennsylvania State Police (PSP) policies and procedures governing personnel use of social media monitoring software. The PSP produced a redacted copy, and after the ACLU challenged the redaction, the state’s Office of Open Records ordered that the full document be provided. The PSP sought review in state court, and that court reversed the Office of Open Records order. The court found that disclosure of the record would be reasonably likely to threaten public safety or a public protection activity.

The court found in particular that disclosure would: (i) allow individuals to know when the PSP can monitor their activities using “open sources” and allow them to conceal their activities; (ii) expose the specific investigative method used; (iii) provide criminals with tactics the PSP uses when conducting undercover investigations; (iv) reveal how the PSP conducts its investigations; and (v) provide insight into how the PSP conducts an investigation and what sources and methods it would use. Additionally, the court credited the PSP’s affidavit which explained that disclosure would jeopardize the PSP’s ability to hire suitable candidates – troopers in particular – because disclosure would reveal the specific information that may be reviewed as part of a background check to determine whether candidates are suitable for employment.

Pennsylvania State Police v. American Civil Liberties Union of Pennsylvania, 2018 WL 2272597 (Commonwealth Court of Pennsylvania, May 18, 2018)

About the Author: Evan Brown is a Chicago technology and intellectual property attorney. Call Evan at (630) 362-7237, send email to ebrown [at] internetcases.com, or follow him on Twitter @internetcases. Read Evan’s other blog, UDRP Tracker, for information about domain name disputes.

Ninth Circuit upholds decision in favor of Twitter in terrorism case

Tamara Fields and Heather Creach, representing the estates of their late husbands and joined by Creach’s two minor children, sued Twitter, Inc. Plaintiffs alleged that the platform knowingly provided material support to ISIS, enabling the terrorist organization to carry out the 2015 attack in Jordan that killed their loved ones. The lawsuit sought damages under the Anti-Terrorism Act (ATA), which allows U.S. nationals injured by terrorism to seek compensation.

Plaintiffs alleged that defendant knowingly and recklessly provided ISIS with access to its platform, including tools such as direct messaging. Plaintiffs argued that these services allowed ISIS to spread propaganda, recruit followers, raise funds, and coordinate operations, ultimately contributing to the attack. Defendant moved to dismiss the case, arguing that plaintiffs failed to show a direct connection between its actions and the attack. Defendant also invoked Section 230 of the Communications Decency Act, which shields platforms from liability for content created by users.

The district court agreed with defendant and dismissed the case, finding that plaintiffs had not established proximate causation under the ATA. Plaintiffs appealed, but the Ninth Circuit upheld the dismissal. The appellate court ruled that plaintiffs failed to demonstrate a direct link between defendant’s alleged support and the attack. While plaintiffs showed that ISIS used defendant’s platform for various purposes, the court found no evidence connecting those activities to the specific attack in Jordan. The court emphasized that the ATA requires a clear, direct relationship between defendant’s conduct and the harm suffered.

The court did not address defendant’s arguments under Section 230, as the lack of proximate causation was sufficient to resolve the case. Accordingly, this decision helped clarify the legal limits of liability for platforms under the ATA and highlighted the challenges of holding technology companies accountable for how their services are used by third parties.

Three Reasons Why This Case Matters:

  • Sets the Bar for Proximate Cause: The ruling established that a direct causal link is essential for liability under the Anti-Terrorism Act.
  • Limits Platform Liability: The decision underscores the difficulty of holding online platforms accountable for misuse of their services by bad actors.
  • Reinforces Section 230’s Role: Although not directly addressed, the case highlights the protections Section 230 offers to tech companies.

Fields v. Twitter, Inc., 881 F.3d 739 (9th Cir. 2018)

Parole conditions barring social media use violated pastor’s First Amendment rights

Plaintiff – a Baptist minister on parole in California – sued several parole officials, arguing that conditions placed on his parole violated his First Amendment rights. Among the contested restrictions was a prohibition on plaintiff accessing social media. Plaintiff claimed this restriction infringed on both his right to free speech and his right to freely exercise his religion. Plaintiff asked the court for a preliminary injunction to stop the enforcement of this condition. The court ultimately sided with plaintiff, ruling that the social media ban was unconstitutional.

The Free Speech challenge

Plaintiff argued that the parole condition prevented him from sharing his religious message online. As a preacher, he relied on platforms such as Facebook and Twitter to post sermons, connect with congregants who could not attend services, and expand his ministry by engaging with other pastors. The social media ban, plaintiff claimed, silenced him in a space essential for modern communication.

The court agreed, citing the U.S. Supreme Court’s ruling in Packingham v. North Carolina, which struck down a law barring registered sex offenders from using social media. In Packingham, the Court emphasized that social media platforms are akin to a modern public square and are vital for exercising free speech rights. Similarly, the court in this case found that the blanket prohibition on social media access imposed by the parole conditions was overly broad and not narrowly tailored to address specific risks or concerns.

The court noted that plaintiff’s past offenses, which occurred decades earlier, did not involve social media or the internet, undermining the justification for such a sweeping restriction. While public safety was a legitimate concern, the court emphasized that parole conditions must be carefully tailored to avoid unnecessary burdens on constitutional rights.

The Free Exercise challenge

Plaintiff also argued that the social media ban interfered with his ability to practice his religion. He asserted that posting sermons online and engaging with his congregation through social media were integral parts of his ministry. By prohibiting social media use, the parole condition restricted his ability to preach and share his faith beyond the physical boundaries of his church.

The court found this argument compelling. Religious practice is not confined to in-person settings, and plaintiff demonstrated that social media was a vital tool for his ministry. The court noted that barring a preacher from using a key means of sharing religious teachings imposed a unique burden on religious activity. Drawing on principles from prior Free Exercise Clause cases, the court held that the parole condition was not narrowly tailored to serve a compelling government interest, as it broadly prohibited access to all social media regardless of its religious purpose.

The court’s decision

The court granted plaintiff’s request for a preliminary injunction, concluding that he was likely to succeed on his claims under both the Free Speech Clause and the Free Exercise Clause of the First Amendment. The ruling allowed plaintiff to use social media during the litigation, while acknowledging the government’s legitimate interest in monitoring parolees. The court encouraged less restrictive alternatives, such as targeted supervision or limiting access to specific sites that posed risks, rather than a blanket ban.

Three reasons why this case matters:

Intersection of Speech and Religion: The case highlights how digital tools are essential for both free speech and the practice of religion, especially for individuals sharing messages with broader communities.

Limits on Blanket Restrictions: The ruling reaffirms that government-imposed conditions, such as parole rules, must be narrowly tailored to avoid infringing constitutional rights.

Modern Application of First Amendment Rights: By referencing Packingham, the court acknowledged the evolving role of social media as a platform for public discourse and religious expression.

Manning v. Powers, 281 F. Supp. 3d 953 (C.D. Cal. Dec. 13, 2017)

Ownership of domain names and social media accounts a key issue in case

Plaintiff sued defendant for unauthorized use of domain names and social media accounts. Plaintiff asked the court to declare its rights to these digital assets and to hold defendant accountable for trademark infringement and other claims. The court decided to allow some claims to proceed while dismissing others based on New York law’s treatment of intangible property.

Plaintiff, a luxury grooming and fragrance company operating under the name MiN New York, hired defendant, Mindy Yang, through her company Superego Management LLC, to manage marketing and social media efforts. After the business relationship ended, plaintiff alleged that defendant retained control of website domains and social media accounts. Defendant allegedly redirected these assets to promote its new business, even using plaintiff’s accounts to advertise its own events.

Defendant argued that the claims for replevin, conversion, and trespass should be dismissed because domain names and social media accounts are intangible and not considered property under New York law. Defendant also sought dismissal of the breach of fiduciary duty claim, asserting that as an independent contractor, it did not owe fiduciary obligations to plaintiff.

The court partially agreed with defendant. It dismissed the trespass claim, finding that plaintiff failed to show harm to the online assets themselves. However, the court allowed plaintiff’s claims for replevin and conversion to proceed, ruling that domain names and social media accounts can qualify as property under New York law. The court recognized that these assets were crucial to plaintiff’s business and plausibly alleged to have been wrongfully controlled by defendant.

On the claim for breach of fiduciary duty, the court ruled in plaintiff’s favor. The court held that plaintiff sufficiently alleged that defendant, by accessing sensitive accounts, using a corporate credit card, and managing key aspects of plaintiff’s marketing, owed fiduciary duties despite being an independent contractor. This established that defendant had a responsibility to act in plaintiff’s best interests.

Three reasons why this case matters:

  • Addresses rights to digital assets: The court’s decision tends to confirm that domain names and social media accounts can be considered property under New York law.
  • Defines fiduciary duties for contractors: The ruling clarifies that independent contractors can owe fiduciary obligations when entrusted with significant responsibilities.
  • Offers a blueprint for online disputes: This case sets important standards for businesses seeking to reclaim control over misappropriated digital assets.

Salonclick LLC v. Superego Management LLC, 2017 WL 239379 (S.D.N.Y. Jan. 18, 2017).
