
Since the mid-1990s, Section 230 (47 U.S.C. § 230) has functioned as a kind of legal safety net for internet platforms, shielding them from lawsuits and liability tied to user-generated content. Many companies have built entire business models with that protection in mind. And one might reasonably conclude that social media would not be as omnipresent today but for Section 230, which allowed the ecosystem to develop.
But a recent decision from Massachusetts’ highest court suggests that this protection is narrower than many assume, and that courts are increasingly willing to look beyond content and into how platforms are actually designed and operated.
Quick overview of the dispute
The Commonwealth of Massachusetts brought suit against Meta, alleging that Instagram was intentionally designed to keep young users engaged in ways that could be harmful. The complaint focused on features such as infinite scrolling, constant notifications, and algorithmic content delivery — all tools that encourage users, particularly teenagers, to spend more time on the platform.
At the same time, the state alleged that Meta publicly downplayed or misrepresented the risks associated with Instagram, presenting it as safe while internal research suggested otherwise. It also claimed that Meta failed to effectively enforce age restrictions, despite representing that it did.
Meta’s response was predictable: it argued that Section 230 bars these claims entirely.
What the court decided
The Supreme Judicial Court of Massachusetts disagreed. While it acknowledged that Section 230 can provide powerful protection, including immunity from having to litigate certain claims at all, it concluded that the statute does not apply here, at least at the motion-to-dismiss stage.
The court allowed the case to proceed, holding that the Commonwealth’s claims were not based on third-party content, but rather on Meta’s own conduct — its design choices, its business practices, and its public statements.
Why the court reached this decision
At the heart of the court’s reasoning is a distinction that is becoming increasingly important in technology litigation: the difference between liability for content and liability for conduct.
Section 230 was designed to prevent platforms from being treated as the publisher or speaker of harmful content created by users. But the court emphasized that this protection has limits. It does not extend to situations where a company is being held accountable for its own actions, particularly where those actions involve how a product is designed or how risks are communicated to users.
Here, the claims did not depend on what users posted on Instagram. Instead, they focused on how the platform itself is structured. The alleged harm flowed from features that encourage compulsive use, not from any specific piece of content. In that sense, the platform’s design (and not its role as a publisher) was being challenged.
The court applied the same reasoning to the Commonwealth's deception claims, making clear that Section 230 does not shield a company from liability for its own statements. If a company represents that its product is safe or non-addictive, and those representations are alleged to be misleading, those claims stand on their own, independent of any user-generated content.
Commonwealth v. Meta Platforms, Inc., — N.E.3d —, 2026 WL 969430 (Mass. Apr. 10, 2026)