A SWATH OF potentially consequential cases for kids’ online safety are making their way through US courts. On Friday, the Massachusetts Supreme Judicial Court will hear arguments in a pivotal one, centered on whether the Commonwealth can continue to pursue its case against Meta, the tech giant that owns Facebook and Instagram.
Attorney General Andrea Campbell has accused Meta of designing its platforms to be addictive for minors while falsely representing to the public that it prioritized youth health and safety. Meta is hoping to escape the lawsuit not by defending itself on the merits, but by arguing that a federal law known as Section 230 should pre-emptively immunize the company from legal challenges over how it designs its platforms.
How to hold social media companies accountable for their effects on our children's health and safety continues to be hotly debated, while the companies themselves argue they bear no responsibility whatsoever for what happens on their platforms.
Over the last few years, state attorneys general and private litigators across the country have been pursuing a set of these cases against Meta. One of the lawsuits recently revealed a trove of internal documents that depict Meta's corporate culture as one of callous disregard for the health and safety of our children.
Section 230 says that courts cannot treat tech companies as the “publisher” of user-generated content. Ever since the law was passed in the 1990s, tech companies have tried to convince courts that Section 230 immunizes them from liability for any activity that can be characterized as “publishing”—which, the companies argue, encompasses almost everything they do.
The companies have met with varying degrees of success. But with the advent of sophisticated, highly designed platforms like Instagram, Snapchat, and TikTok, courts have increasingly rejected Big Tech’s characterization of Section 230 as a broad immunity shield.
And for good reason. As we explained in an amicus brief we filed in the Supreme Judicial Court with various non-profits and legal scholars, Congress didn't intend Section 230 to be a catch-all, get-out-of-jail-free card for Big Tech. It was meant to serve the limited but important purpose of preventing companies that host user-generated content online from facing unlimited liability for harm caused by that content—particularly when that liability is tied to a company's choice to moderate the content on its platforms.
If companies had to ensure that their platforms were completely free from any content that could carry legal liability, they would be forced to either abandon content moderation, extensively monitor and censor user speech, or stop hosting user-generated content altogether, creating what we refer to as “the moderator’s dilemma.” All of these choices are bad for the online speech environment, which is why Section 230 prohibits forcing companies into the dilemma.
Holding Meta—and other tech companies—accountable for designing their platforms to be addictive for kids does not force Meta into the moderator's dilemma. Our brief explains how the platform features at the heart of the Commonwealth's case—things like infinite scroll, autoplay, the timing and batching of push notifications, and other tactics borrowed from the gambling industry—have nothing to do with content moderation; they are designed to elicit user behavior that furthers the company's own business goals.
You don't have to take our word for it. We examined the companies' patents, which describe in their own words these goals—such as extended use or increased return visits—that are independent of the user-generated content displayed. We also detailed how Meta could choose alternative designs that mitigate the harm to kids without touching any user-generated content.
In short, applying Section 230 in this case would not protect online speech at all. The only effect would be to further incentivize Meta to disregard user safety as it designs its products to maximize profits.
Section 230 serves an important purpose. But that purpose is not served by applying it in this case. Not giving Meta Section 230 protections won’t break the internet. It won’t even mean that Meta is automatically liable. All it would mean is that the victims of real harm that might have been facilitated by an online company’s own design choices would have their day in court, something they deserve to have.
Megan Iorio is senior counsel at the Electronic Privacy Information Center, where she directs its program on platform governance and accountability; Laura Edelson is assistant professor of computer science at Northeastern University, whose research includes studies of social media design characteristics. Yaël Eisenstat is policy director at Cybersecurity for Democracy. She is the former head of global elections integrity for political ads at Facebook and became a leading whistleblower on how the company operates.
CommonWealth Voices is sponsored by The Boston Foundation.
The Boston Foundation is deeply committed to civic leadership, and essential to our work is the exchange of informed opinions. We are proud to partner on a platform that engages such a broad range of demographic and ideological viewpoints.