A lawsuit claims that social media is a defective product.

A California court could soon decide whether social media firms need to pay — and change their ways — for the damage they’ve allegedly done to Americans’ mental health.

Plaintiffs’ lawyers plan to file a consolidated complaint in the Northern District of California next month, accusing the tech giants of making products that can cause eating disorders, anxiety and depression.

If the case is allowed to proceed, it will test a novel legal theory: that social media algorithms are defective products that encourage addictive behavior and are therefore governed by existing product liability law. That could have far-reaching consequences for how software is developed and regulated, and for how the next generation of users experiences social media.

It also could upstage members of Congress from both parties and President Joe Biden, who have called for regulation ever since former Facebook product manager Frances Haugen released documents revealing that Meta — Facebook and Instagram’s parent company — knew Instagram users were suffering ill health effects, but who have failed to act in the 15 months that followed.

“Frances Haugen’s revelations suggest that Meta has long known about the negative effects Instagram has on our kids,” said Previn Warren, an attorney for Motley Rice and one of the leads on the case. “It’s similar to what we saw in the 1990s, when whistleblowers leaked evidence that tobacco companies knew nicotine was addictive.”

Meta hasn’t responded to the lawsuit’s claims, but the company has added new tools to its social media sites to help users curate their feeds, and CEO Mark Zuckerberg has said the company is open to new regulation from Congress.

The plaintiffs’ lawyers, led by Motley Rice, Seeger Weiss, and Lieff Cabraser Heimann & Bernstein, believe they can convince the judiciary to move first. They point to studies on the harms of heavy social media use, particularly for teens, and Haugen’s “smoking gun” documents.

Still, applying product liability law to an algorithm is relatively new legal territory, though a growing number of lawsuits are putting it to the test. In traditional product liability jurisprudence, the chain of causality is usually straightforward: a ladder with a third rung that always breaks. With an algorithm, it is far harder to prove that the product directly caused the harm.

Legal experts even debate whether an algorithm can be considered a product at all. Product liability laws have traditionally covered flaws in tangible items: a hair dryer or a car.

Case law is far from settled, but an upcoming Supreme Court case could chip away at one of the defense’s arguments. A provision of the Communications Decency Act of 1996 known as Section 230 protects social media companies by restricting lawsuits over content users post on their sites, a legal shield that could also safeguard the firms from the product liability claim.

The high court will hear oral arguments in Gonzalez v. Google on Feb. 21. The justices will weigh whether Section 230 protects content recommendation algorithms. The case stems from the death of Nohemi Gonzalez, who was killed by ISIS terrorists in Paris in 2015. The plaintiffs’ attorneys argue that Google’s algorithm showed ISIS recruitment videos to some users, contributing to their radicalization in violation of the Anti-Terrorism Act.

If the court sides with the Gonzalez family, it would limit the wide-ranging immunity tech companies have enjoyed and potentially remove a barrier to the product liability case.

Congress and the courts

Since Haugen’s revelations, which she expanded on in testimony before the Senate Commerce Committee, lawmakers of both parties have pushed bills to rein in the tech giants. Their efforts have focused on limiting the firms’ collection of data about both adults and minors, reducing the creation and proliferation of child pornography, and narrowing or removing protections afforded under Section 230.

The two bills that have gained the most attention are the American Data Privacy and Protection Act, which would limit the data tech companies can collect about their users, and the Kids Online Safety Act, which seeks to restrict data collection on minors and create a duty to protect them from online harms.

However, despite bipartisan support, Congress passed neither bill last year, amid concerns about federal preemption of state laws.

Sen. Mark Warner (D-Va.), who has proposed separate legislation to reduce the tech firms’ Section 230 protections, said he plans to continue pushing: “We’ve done nothing as more and more watershed moments pile up.”

Some lawmakers have urged the Supreme Court to rule for Gonzalez in the upcoming case, or at least to issue a narrow ruling that chips away at the scope of Section 230. Among those filing amicus briefs were Sens. Ted Cruz (R-Texas) and Josh Hawley (R-Mo.), as well as the states of Texas and Tennessee. In 2022, lawmakers in several states introduced at least 100 bills aimed at curbing content on tech company platforms.

Earlier this month, Biden penned an op-ed for The Wall Street Journal calling on Congress to pass laws that protect data privacy and hold social media companies accountable for the harmful content they spread, part of a broader push for reform. “Millions of young people are struggling with bullying, violence, trauma and mental health,” he wrote. “We must hold social-media companies accountable for the experiment they are running on our children for profit.”

The product liability suit offers another path to that end. Lawyers on the case say that the sites’ content recommendation algorithms addict users, and that the companies know about the mental health impact. Under product liability law, the lawyers say, the algorithms’ makers have a duty to warn consumers when they know their products can cause harm.

A plea for regulation

The tech firms haven’t yet addressed the product liability claims. However, they have repeatedly argued that eliminating or watering down Section 230 will do more harm than good. They say it would force them to dramatically increase censorship of user posts.

Still, since Haugen’s testimony, Meta has asked Congress to regulate it. In a note to employees written after Haugen spoke to senators, Zuckerberg challenged her claims but acknowledged public concerns.

“We’re committed to doing the best work we can,” he wrote, “but at some level the right body to assess tradeoffs between social equities is our democratically elected Congress.”

The firm says it backs some changes to Section 230 “to make content moderation systems more transparent and to ensure that tech companies are held accountable for combating child exploitation, opioid abuse, and other types of illegal activity.”

It has introduced 30 tools on Instagram that it says make the platform safer, including an age verification system.

According to Meta, teens under 16 are automatically given private accounts with limits on who can message them or tag them in posts. The company says minors are shown no alcohol or weight loss advertisements. And last summer, Meta launched a “Family Center,” which aims to help parents supervise their children’s social media accounts.

“We don’t allow content that promotes suicide, self-harm or eating disorders, and of the content we remove or take action on, we identify over 99 percent of it before it’s reported to us. We’ll continue to work closely with experts, policymakers and parents on these important issues,” said Antigone Davis, global head of safety at Meta.

TikTok has also tried to address disordered eating content on its platform. In 2021, the company started working with the National Eating Disorders Association to suss out harmful content. It now bans posts that promote unhealthy eating habits and behaviors. It also uses a system of public service announcement hashtags to highlight content that encourages healthy eating.

The biggest challenge, a company spokesperson said, is that the language around disordered eating and its promotion is constantly changing, and that content that may harm one person may not harm another.

Curating their feeds

In the absence of strict regulation, advocates for people with eating disorders are using the tools the social media companies provide.

They say the results are mixed and hard to quantify.

Nia Patterson is in recovery from an eating disorder and works for Equip, a firm that offers eating disorder treatment via telehealth. A regular social media user, Patterson has blocked accounts and asked Instagram not to serve up certain ads.

Patterson uses the platform to reach others with eating disorders and offer support.

But teaching the platform not to serve her certain content took work, and the occasional weight loss ad still slips through, Patterson said. That kind of algorithm training can be hard for people who have just begun to recover from an eating disorder, or who are not yet in recovery: “The three seconds that you watch of a video? They pick up on it and feed you related content.”

Part of the reason teens are so susceptible to social media’s temptations is that they are still developing. “When you think about teenagers, adolescents, their brain growth and development is not quite there yet,” said Allison Chase, regional clinical director at ERC Pathlight, an eating disorder clinic. “What you get is some really impressionable individuals.”

Jamie Drago, a peer mentor at Equip, developed an eating disorder in high school, she said, after becoming obsessed with a college dance team’s Instagram feed.

At the same time, she was seeing posts from influencers pushing three-day juice cleanses and smoothie bowls. She recalls experimenting with fruit diets and calorie restriction, and then starting her own Instagram food account to catalog her insubstantial meals.

When she thinks back on her experience and her social media habits, she recognizes that the problem wasn’t social media itself. It was the way content recommendation algorithms repeatedly served her content that pushed her to compare herself with others.

“I didn't accidentally stumble upon really problematic things on MySpace,” she said, referencing a social media site where she also had an account. Instagram’s algorithm, she said, was feeding her problematic content. “Even now, I stumble upon content that would be really triggering for me if I was still in my eating disorder.”

----------------------------------------

By: Ruth Reader
Title: Social media is a defective product, lawsuit contends
Sourced From: www.politico.com/news/2023/01/26/social-media-lawsuit-mental-illness-00079515
Published Date: Thu, 26 Jan 2023 04:30:00 EST
