Social media addiction claims against Meta not barred by Section 230

Since the mid-1990s, Section 230 (47 U.S.C. § 230) has functioned as a kind of legal safety net for internet platforms, shielding them from lawsuits and liability tied to user-generated content. Many companies have built entire business models with that protection in mind. And one might reasonably conclude that social media would not be as omnipresent today but for the protections of Section 230, which allowed the ecosystem to develop.

But a recent decision from Massachusetts’ highest court suggests that this protection is narrower than many assume, and that courts are increasingly willing to look beyond content and into how platforms are actually designed and operated.

Quick overview of the dispute

The Commonwealth of Massachusetts brought suit against Meta, alleging that Instagram was intentionally designed to keep young users engaged in ways that could be harmful. The complaint focused on features such as infinite scrolling, constant notifications, and algorithmic content delivery — all tools that encourage users, particularly teenagers, to spend more time on the platform.

At the same time, the state alleged that Meta publicly downplayed or misrepresented the risks associated with Instagram, presenting it as safe while internal research suggested otherwise. It also claimed that Meta failed to effectively enforce age restrictions, despite representing that it did.

Meta’s response was predictable: it argued that Section 230 bars these claims entirely.

What the court decided

The Supreme Judicial Court of Massachusetts disagreed. While it acknowledged that Section 230 can provide powerful protection, including immunity from having to litigate certain claims at all, it concluded that the statute does not apply here, at least at the motion-to-dismiss stage.

The court allowed the case to proceed, holding that the Commonwealth’s claims were not based on third-party content, but rather on Meta’s own conduct — its design choices, its business practices, and its public statements.

Why the court reached this decision

At the heart of the court’s reasoning is a distinction that is becoming increasingly important in technology litigation: the difference between liability for content and liability for conduct.

Section 230 was designed to prevent platforms from being treated as the publisher of harmful content created by users. But the court emphasized that this protection has limits. It does not extend to situations where a company is being held accountable for its own actions, particularly where those actions involve how a product is designed or how risks are communicated to users.

Here, the claims did not depend on what users posted on Instagram. Instead, they focused on how the platform itself is structured. The alleged harm flowed from features that encourage compulsive use, not from any specific piece of content. In that sense, the platform’s design (and not its role as a publisher) was being challenged.

The court applied the same reasoning to the Commonwealth’s deception claims, making clear that Section 230 does not shield a company from liability for its own statements. If a company represents that its product is safe or non-addictive, and those representations are alleged to be misleading, those claims stand on their own, independent of any user-generated content.

Commonwealth v. Meta Platforms, Inc., — N.E.3d —, 2026 WL 969430 (Mass. Apr. 10, 2026)

Social media posts about divorce caused man to be found in contempt of court

The Court of Appeals of Wisconsin affirmed an order holding a husband in contempt for violating a family-court ban on social media posts about his divorce case and the parties’ children.

Petitioner-wife asked the circuit court to hold respondent-husband in contempt, enforce the social media restrictions, and impose sanctions. She requested both punishment for the husband’s violations of the court’s directives and remedial measures to stop further violations.

What the court ruled

The circuit court entered the requested relief, and the husband sought review. On appeal, the court affirmed the contempt order. It held that the husband had not shown any error in the circuit court’s finding that he willfully violated the court’s oral and written directives barring social media posts about the children and the case. The court also declined to consider the husband’s other challenges, including his First Amendment and overbreadth claims, because they were undeveloped and unsupported by legal authority and record citations.

Rationale

The appellate court concluded that the circuit court reasonably exercised its contempt power because the husband admitted posting a video after the oral ruling that barred posts about the children, and that recording included material from the hearing about custody-related recommendations. The court held this provided sufficient notice and an adequate basis for the contempt finding. As to the rest of the appeal, the husband failed to present coherent, legally developed arguments, so those issues were not considered.

In re the marriage of Fox, 2026 WL 945666 (Wis. Ct. App. Apr. 7, 2026)

New Jersey court sends OKLegal.com suit against Meta to California

The United States District Court for the District of New Jersey transferred OKLegal.com’s lawsuit against Meta Platforms, Inc. and Mark Zuckerberg to the Northern District of California under Instagram’s forum-selection clause.

Plaintiff OKLegal.com sued defendants Meta Platforms, Inc. and Mark Zuckerberg after Instagram indefinitely banned its account. The complaint (1) alleged the ban violated the First Amendment, (2) sought a declaration that Section 230 is unconstitutional, and (3) asserted tortious interference with an advantageous business relationship.

Plaintiff asked the court to keep the case in New Jersey and to deny defendants’ request to enforce the forum-selection clause in the Instagram Terms of Use, while also opposing defendants’ alternative request to dismiss the complaint.

The court’s ruling

The court granted defendants’ motion in part by transferring the case to the United States District Court for the Northern District of California under 28 U.S.C. § 1404(a). It denied the remainder of the motion without prejudice, allowing defendants to raise their dismissal arguments in California.

Reasoning for the transfer

The court found plaintiff could have brought the action in the Northern District of California because Meta’s principal place of business and Zuckerberg’s residence are located there. It also found plaintiff agreed to Instagram’s Terms of Use when creating and using the account, and that those terms required non-arbitrated claims to be litigated exclusively in the Northern District of California or a state court in San Mateo County. The court rejected plaintiff’s arguments that the clause was unenforceable as clickwrap, adhesive, unconscionable, or contrary to public policy, and it concluded that plaintiff failed to show the relevant public-interest factors overwhelmingly disfavored transfer.

OkLegal.com v. Meta Platforms, Inc., 2026 WL 850812 (D.N.J. Mar. 27, 2026)

SDNY lets tortious interference claim proceed in browser extension case

The U.S. District Court for the Southern District of New York largely trimmed a suit by online content creators over alleged affiliate-commission hijacking by a browser extension, but allowed the claim for tortious interference with contract to proceed.

Plaintiffs, a group of online content creators, sued defendants RetailMeNot, Inc. and Ziff Davis, Inc., alleging that defendants’ browser extension wrongfully overwrote plaintiffs’ affiliate tracking codes and diverted sales commissions that plaintiffs otherwise would have received from merchants.

Motion to dismiss

Defendants asked the court to dismiss the case, arguing that plaintiffs lacked Article III standing and that the complaint failed to state any viable claims, including common law, computer fraud, and consumer protection claims.

The court ruled that plaintiffs had standing to sue and denied dismissal on that ground, but it otherwise granted the motion in substantial part and dismissed all claims except plaintiffs’ claim for intentional interference with contractual relations.

Why the tortious interference claim survived

The court found that the complaint plausibly alleged an actual and traceable injury through test purchases and statistical evidence, and that plaintiffs adequately pleaded that defendants knew of plaintiffs’ affiliate contracts and intentionally caused merchants to breach those contracts by crediting commissions to defendants instead of plaintiffs. But the court found the remaining claims deficient for reasons including failure to allege that plaintiffs conferred a benefit on defendants, failure to plead interference with a prospective relationship rather than an existing contract, failure to identify convertible property, failure to show unauthorized computer access, and failure to allege the consumer harm required under the cited state statutes.

In re RetailMeNot Browser Extension Litigation, No. 25-CV-783, 2026 WL 820585 (S.D.N.Y. Mar. 25, 2026)

Does the DMCA safe harbor cover infringing images in an email?

Plaintiff photographer sued Pinterest for copyright infringement, alleging Pinterest displayed his and other photographers’ copyrighted images in notifications sent outside of the Pinterest website. Pinterest moved for summary judgment, arguing it was protected under the safe harbor provisions of Section 512(c) of the Digital Millennium Copyright Act (“DMCA”). The court granted Pinterest’s motion and dismissed the case.

Pinterest is a familiar and massive social media platform on which individuals upload and share image-based “Pins” that function as visual bookmarks. The platform displays Pins in personalized, algorithmically curated feeds that contain advertisements labeled as “promoted.” Pinterest also delivers content through notifications such as emails, in-app alerts, and push notifications, which contain hyperlinks that trigger display of images hosted on its servers. One such notification that plaintiff received included his copyrighted photograph, prompting him to file suit six days later.

The court found that Pinterest’s actions fell within the DMCA’s Section 512(c) safe harbor, which shields service providers from copyright liability for content stored at the direction of users. Because Pinterest raised this as an affirmative defense, it had the burden to prove every element of the safe harbor criteria, and the court concluded it had met both the statutory threshold and all required conditions.

Statutory threshold requirements under the DMCA

To qualify for the DMCA safe harbor, Pinterest had to meet several threshold statutory requirements that are found in Sections 512(c) and (i): it had to be a service provider, maintain a designated agent, implement a repeat infringer policy, and accommodate standard technical measures. The court found that Pinterest satisfied all four. As “one of the largest social media platforms in the world,” it operated a qualifying online platform as defined by the statute. The evidence showed that Pinterest maintained a registered agent with the Copyright Office and that it enforced a strike-based policy for repeat infringers. And the court found that Pinterest did not interfere with any recognized standard technical measures that plaintiff implemented with his works. (Plaintiff had asserted that he embedded certain metadata in his photographs, but he did not argue that this metadata qualified as a “standard technical measure” under the DMCA, nor did he claim that Pinterest interfered with it — in fact, he alleged that Pinterest preserved the metadata on its servers.)

How Pinterest met the required conditions

After finding that Pinterest satisfied the DMCA’s threshold requirements, the court turned to whether Pinterest’s conduct of sending copyright-protected images in off-platform notifications was protected under Section 512(c). To qualify, Pinterest had to show three things:

  • the alleged infringement occurred due to user-directed storage;
  • Pinterest lacked actual or red flag knowledge of the infringement; and
  • Pinterest either had no right and ability to control the activity or did not receive a direct financial benefit from it.

The court evaluated each element in turn.

By reason of storage at the direction of a user

The court concluded that Pinterest met the first requirement for DMCA safe harbor protection: the alleged infringement occurred “by reason of the storage at the direction of a user.” It emphasized that the image at issue was not embedded in the notification itself but was instead hosted on Pinterest’s servers and accessed via a hyperlink contained in the notification. When a user opened the message, their software triggered a request to Pinterest’s server to retrieve and display the image, just as it would when accessing content directly through the platform. Because this method merely facilitated access to user-uploaded content without altering it, the court found the display was within the statutory definition.
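
To make the mechanism concrete, here is a minimal sketch in Python of the kind of notification the court described, using hypothetical addresses and URLs that are not drawn from the opinion. The email does not embed the photograph; its HTML body merely references an image hosted on the platform’s servers, which the recipient’s email client requests and displays when the message is opened.

    from email.message import EmailMessage

    # Hypothetical notification email. The key point: the photograph is not
    # attached to or embedded in the message. The HTML body only references
    # an image stored on the platform's servers, so the recipient's email
    # client fetches it from those servers when the message is opened.
    msg = EmailMessage()
    msg["Subject"] = "New Pins we think you'll like"
    msg["From"] = "notifications@platform.example"
    msg["To"] = "user@example.com"
    msg.set_content("See the new Pins we picked for you.")  # plain-text fallback
    msg.add_alternative(
        """\
        <html>
          <body>
            <p>See the new Pins we picked for you.</p>
            <!-- Hyperlink to user-uploaded content stored on the platform's
                 servers; no image data is copied into the email itself. -->
            <a href="https://platform.example/pin/12345">
              <img src="https://images.platform.example/pin/12345.jpg" alt="Pin">
            </a>
          </body>
        </html>
        """,
        subtype="html",
    )

Because the email only points back to content already stored at users’ direction, the notification does not change where the image lives, which is the fact the court treated as decisive.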

No knowledge of infringement

The court found that Pinterest satisfied the second requirement for DMCA safe harbor protection by showing it lacked actual or red flag knowledge of the alleged infringement. Critically, plaintiff never sent Pinterest a DMCA takedown notice or otherwise identified the allegedly infringing material before filing suit. The DMCA operates on a notice-and-takedown system: platforms are not required to proactively monitor user content but must respond once they receive proper notice. Because plaintiff gave no such notice and offered no evidence that Pinterest otherwise knew about the specific image at issue, the court concluded there was no genuine dispute as to Pinterest’s lack of knowledge.

Control and financial benefit

The court found that Pinterest met the third and final requirement for DMCA safe harbor by showing it neither had the right and ability to control the alleged infringement nor received a financial benefit directly attributable to it. While Pinterest used algorithms to curate content and monetize its platform generally, the court held that this did not amount to the kind of “substantial influence” over user activity that would disqualify it under the DMCA. Pinterest did not direct users to upload specific content, nor did it participate in any purposeful conduct related to the display of plaintiff’s photo.

The court also rejected plaintiff’s claim that Pinterest profited directly from the infringement. Pinterest presented evidence that its notifications did not contain advertisements and that it earned no revenue specifically tied to the image in question. Plaintiff’s counter-evidence failed to show otherwise. Even if ads had appeared near the image, the law requires a direct connection between the infringing display and revenue, which was absent here. Therefore, Pinterest satisfied this final element of the DMCA safe harbor defense.

Harrington et al. v. Pinterest, Inc., No. 20-CV-5290, 2026 WL 25880 (N.D. Cal. Jan. 5, 2026)

Ninth Circuit declines to impose broad injunction against California’s social media law for minors

NetChoice, an internet trade association representing companies such as Google, Meta, and X, sued the State of California over its Protecting Our Kids from Social Media Addiction Act, claiming that the law violates the First Amendment. The Act restricts how social media platforms interact with minors, particularly limiting access to algorithmic feeds, requiring certain default settings, and mandating age-verification procedures.

Plaintiff asked the court to block enforcement of several provisions of the law through a preliminary injunction, focusing on its claims that aspects of the Act unlawfully restrict speech and are unconstitutionally vague. The lower court declined to issue the injunction. Plaintiff sought review with the Ninth Circuit.

On appeal, the Ninth Circuit largely affirmed the district court’s refusal to issue a broad injunction but ruled that the provision of the law requiring platforms to hide like and share counts by default for minors is unconstitutional. It reversed the lower court on that point and instructed it to modify its injunction to prevent enforcement of that specific provision.

The court ruled this way because it found the like-count requirement to be content-based and therefore subject to strict scrutiny under the First Amendment. The government failed to show that hiding like counts was the least restrictive means to achieve its goal of protecting minors’ mental health. Other provisions, including those governing private-mode settings and age verification, either survived scrutiny or were deemed unripe for review.

NetChoice LLC v. Bonta, — F.4th —, 2025 WL 2600007 (9th Cir. Sept. 9, 2025)

TikTok and Meta terms granted other users remix rights

Plaintiff sued TikTok and Meta after other users on those platforms incorporated clips from her video into their own posts, allegedly without her permission. She claimed this was copyright infringement and also alleged that TikTok failed to protect her from harassment by users in the comments of her live videos. Plaintiff filed the lawsuit on her own, without a lawyer.

Plaintiff asked the court to hold TikTok and Meta liable for copyright infringement and to consider tort claims against TikTok for harassment. But both companies responded by asking the court to dismiss the case. They pointed to the user agreements plaintiff had accepted when she signed up. Those terms gave the platforms and their users broad rights to use, modify, and distribute any content she uploaded. TikTok also invoked immunity under 47 U.S.C. § 230, a provision in federal law protecting platforms from liability for user-generated content.

The court agreed with the platforms. It found that plaintiff had granted TikTok and Meta valid licenses to use her video, so there could be no copyright violation. The court also ruled that it had no authority to hear the tort claims because plaintiff had not shown that the court had jurisdiction over those parts of the case. The court rejected plaintiff’s arguments that she did not fully understand the contracts or that the agreements were unfair. On appeal, the Tenth Circuit upheld the decision, finding no clear error in how the lower court handled the case and ruling that plaintiff had waived her right to challenge the licensing issue by not objecting to it specifically.

In the end, the court dismissed all claims against both companies. The court also declined to take up any new claims plaintiff tried to raise during the appeal, saying she had not brought those up earlier and did not support them with enough detail.

Three reasons why this case matters:

  • It reinforces how powerful and far-reaching social media terms of service can be in protecting platforms from copyright claims.

  • It shows the importance of making specific objections and arguments in court—especially during appeals.

  • It highlights how courts apply procedural rules strictly, even when someone is representing themselves without a lawyer.

Sethunya v. TikTok, 2025 WL 1144776 (10th Cir. Apr. 18, 2025)

Did Facebook ads targeted at people under 50 unlawfully discriminate on the basis of age?

Several property management companies in the Washington, D.C. area advertised rental properties on Facebook, but only to users aged 50 and younger. Plaintiff, a 55-year-old woman, never saw these ads while searching for housing. She sued, claiming the companies discriminated against her based on age.

Plaintiff argued that by excluding users over 50 from seeing the ads, the companies deprived her of housing opportunities and information. She asked the court for a declaratory judgment, a permanent injunction, and damages. The district court dismissed the case, ruling that she lacked standing because she had not suffered a concrete injury. She sought review with the Fourth Circuit.

The appellate court upheld the dismissal. The court explained that to have standing, a plaintiff must show an injury that is real, personal, and specific. Plaintiff’s claim failed because she did not allege that she had been directly denied housing or misled by the defendants. She also did not show that, even without age targeting, she would have seen the ads; Facebook’s algorithm determined ad distribution based on multiple factors, not just age. The court also rejected her argument that she suffered stigma from the companies’ ad practices, finding that she had not been personally affected in a way that would give her standing to sue.

Three reasons why this case matters:

  • Simply being part of a group that may have been treated unfairly is not enough; a plaintiff must show personal harm.
  • Businesses using demographic filters in online ads may be shielded from lawsuits unless a plaintiff can prove direct harm.
  • The ruling highlights that courts do not recognize speculative or abstract injuries as grounds for a lawsuit.

Opiotennione v. Bozzuto Mgmt. Co., 2025 WL 678636 (4th Cir. Mar. 4, 2025)
