Social Media Jury Verdicts Ignite Internet Free Speech Debates

March 31, 2026, 9:00 AM UTC

An unlikely coalition of advocates opposed to the wave of Big Tobacco-style litigation targeting social media companies over youth addiction is speaking out against a pair of landmark jury verdicts that last week found internet platforms legally responsible for harming teenagers.

The groups have reinvigorated an intense debate—frequently occurring on social media—about sound internet policy, First Amendment rights, and the tech industry’s legal shield known as Section 230.

Two jury decisions—a $6 million verdict against Meta Platforms Inc. and Google LLC in Los Angeles and a $375 million verdict against Meta in New Mexico—drew widespread applause from tech critics, consumer advocates, and plaintiffs' attorneys who have for years fought an uphill battle to hold social media networks liable in court.

But just as quickly, an unusual alliance of libertarian-leaning First Amendment advocates and left-wing civil rights groups argued that the legal logic behind these trials will be detrimental to expression on the internet and could bolster legislation that will target minority groups and undermine privacy rights. The groups range from the libertarian Cato Institute to the progressive digital rights group Fight for the Future.

“It’s always been a weird coalition of people supporting free speech, right?” said Neeraja Deshpande, a policy analyst at Independent Women.

Talking About Mental Health

The fact that the two cases even reached trial spotlights a dramatic change in the legal landscape.

Historically, most courts rejected lawsuits targeting internet platforms, citing the legal protections of Section 230 of the federal Communications Decency Act, a 1996 law that shields platforms from liability for harmful content posted by users.

But plaintiff-side attorneys have managed to pierce that shield by arguing under traditional products liability theories that harms faced by teenagers, like depression, anxiety, and anorexia, flow from the design of addictive algorithms, not the content.

Critics of the verdicts say the cases are inescapably about content—an algorithm that serves only videos of TV static, for example, wouldn’t be as sticky for users. That’s why the cigarette analogy falls flat, said Ari Cohn, lead tech policy counsel for the Foundation for Individual Rights and Expression.

“A cigarette, when used as intended, will invariably and inherently kill you,” Cohn said. “The impact of speech on a person is so varied from individual to individual.”

The Los Angeles trial centered on a 20-year-old woman who started using YouTube at age six and Instagram at age nine, around the same time, her attorney Mark Lanier said at trial, that internal documents show Meta was trying to maximize user time spent on its platforms.

Her lawyers argued she was vulnerable to the companies’ efforts to keep her attention glued to their sites. The companies argued that she used the platforms as an outlet to cope with offline stress.

Some observers warn of unintended consequences related to free speech.

If the verdict holding companies liable for harms experienced by users stands, it could lead websites “to effectively start to monitor for vulnerable users and serve them a different online experience,” said Jess Miers, assistant professor at the University of Akron School of Law.

That could mean removing content that touches on mental health in any form, including posts related to suicide prevention or open discussions about anxiety and depression, Miers said.

“The social media services don’t really have a good way currently of identifying what kinds of content will be specifically triggering” for a particular user, Miers said. “And so either they start tracking how users are using their services and serving them different experiences, or, I mean, even easier—they sort of take a broad axe to any kind of content that could present a risk.”

Trevor Timm, co-founder and executive director of Freedom of the Press Foundation, said he’s concerned the Los Angeles verdict’s logic could be applied to target news organizations that use algorithms to promote stories to readers. They could hypothetically be held liable because reading bad news is linked to poor mental health outcomes.

“I definitely don’t think social media is good for us, especially to be consuming in large quantities,” Timm said. “But it is possible that the solution to that problem can be worse than the harms that it’s producing.”

All Eyes on Appeal

Both Meta and Google have said they plan to appeal, and their challenges will allow them to attack the rulings that sent the cases to trial in the first place.

YouTube has said it’s not a social media platform, and a Meta spokesperson said it’s continually improving safety measures on its platforms, regardless of any verdict.

“I wouldn’t assume that the plaintiffs have a slam dunk victory on appeal,” said Eric Goldman, an internet law professor at Santa Clara University School of Law who has been an advocate for Section 230. “One scenario is that these trials get entirely vaporized.”

Just last year, the US Court of Appeals for the Ninth Circuit, the federal appellate court overseeing California and much of the American West, ruled that Section 230 immunizes the LGBTQ+ dating app Grindr from a defective design lawsuit. That case involved similar claims, alleging that Grindr failed to prevent sexual abuse on its platform when it allowed an underage boy to create an account and meet an adult who raped him.

The US Supreme Court has largely stayed away from the law. While it heard a Section 230 case in 2023 concerning YouTube and Twitter recommendations of terrorist propaganda, it ultimately punted, declining to make any substantive ruling on the statute.

But the lawsuits have drawn the interest of lawmakers as well: numerous states in the past few years have passed legislation regulating social media platforms, requiring measures such as age verification, which some privacy advocates say will require users to hand more data to tech companies.

“Even if the defendants find a magic way to eliminate these cases, they still have to contend with the statutes that the legislature has created,” Goldman said. “Social media is in trouble.”

—Madlin Mekelburg contributed to this story.

To contact the reporters on this story: Isaiah Poritz in San Francisco at iporitz@bloombergindustry.com; Maia Spoto in Los Angeles at mspoto@bloombergindustry.com

To contact the editors responsible for this story: Stephanie Gleason at sgleason@bloombergindustry.com; Alicia Cohn at acohn@bloombergindustry.com
