Liability for Recommendations

by Dennis Crouch

The US Supreme Court heard oral arguments today in the major internet-law case of Gonzalez v. Google, focusing on Section 230(c) of the Communications Decency Act of 1996.  That provision creates a wide safe harbor for internet service providers, shielding them from liability associated with publishing third-party content.  Section 230 fostered the now-dominant social media business model in which almost all of the major internet media services rely primarily upon user-provided content.  Think YouTube, Instagram, Facebook, Twitter, TikTok, LinkedIn, etc.  Likewise, search engines like Google and Bing essentially provide a concierge recommendation service for user-developed content and data.  The new AI models also work from a large corpus of user-created data, although AI may be different since it is more content-generative than most social media.

The safe-harbor statute specifically states that the service provider will not be treated as the “publisher” of information content provided by someone else (“another information content provider”).  47 U.S.C. § 230(c).  At common law, a publisher could be held liable for publishing and distributing defamatory material, and the safe harbor eliminates that potential liability.  Thus, if someone posts a defamatory YouTube video, YouTube (Google) won’t be held liable for publishing the video.  (The person who posted the video could still be held liable, if you can find him.)

Liability for Recommending: In addition to publishing videos, all of the social media companies use somewhat sophisticated algorithms to recommend content to users. For YouTube, the basic idea is to keep users engaged longer and thus increase advertising revenue.  The case before the Supreme Court asks whether the Section 230(c) safe harbor protects social media companies from liability when their recommendations cause harm.  If you have ever wasted an hour doom-scrolling on TikTok, you will recognize that the service provided was a steady stream of curated content designed to keep you watching. Each individual video is something, but really you were latched into the stream.  The question then is whether the safe-harbor statute excuses that entire interaction, or whether it is limited to each individual posting.

For me, in some ways it is akin to the Supreme Court’s struggle over Fourth Amendment privacy interests in cell-phone location information. While a single point of location information might not be constitutionally protected, 127 days of information is an entirely different matter.  See Carpenter v. United States, 138 S. Ct. 2206 (2018).  Here, the safe harbor applies to a single video or posting by a user, but the sites compile and curate those postings into a steady stream that might also be seen as an entirely different matter.

The Gonzalez plaintiffs’ daughter, Nohemi Gonzalez, was killed in the 2015 Paris terrorist attacks coordinated by ISIS.  In the lawsuit, the family alleges that YouTube is partially responsible because its algorithms provided tailor-made recommendations of pro-ISIS videos to susceptible individuals who then participated in and supported the terrorist attacks that killed their daughter.  You may be thinking that the plaintiffs will have difficulty proving causation.  I think that is right, but the case was cut short on Section 230 grounds before really reaching that issue.

The Ninth Circuit ruled in favor of Google, and the Supreme Court then agreed to hear the case on the following question:

Does section 230(c)(1) immunize interactive computer services when they make targeted recommendations of information provided by another information content provider, or only limit the liability of interactive computer services when they engage in traditional editorial functions (such as deciding whether to display or withdraw) with regard to such information?

More than 80 briefs were filed with the Supreme Court arguing various positions, a very large number for a Supreme Court case.  Many of the briefs argue that shrinking the scope of Section 230 would radically diminish the pluralism and generativity that we see online.  I might be OK with that if it gets TikTok out of my house.

As noted above, the plaintiffs’ case seems to lack some causal links, and in my view there is a very good chance that the Court will decide the case on those grounds (via the sister case involving Twitter).  Justice Alito’s early question to petitioner’s counsel highlights the problem.

Justice Alito: I’m afraid I’m completely confused by whatever argument you’re making at the present time.

I also appreciated Justice Sotomayor’s humility on behalf of the Court.

Justice Sotomayor: We’re a court. We really don’t know about these things. These are not the nine greatest experts on the internet.

Congress passed a separate safe harbor in the copyright context as part of the DMCA.  A key difference there was that copyright holders were able to lobby for more limits on the safe harbor. For instance, under the DMCA’s notice-and-takedown provision, a social media company must remove infringing content once it is on notice.  Section 230 does not include any takedown requirement. Thus, even after YouTube is notified of defamatory or otherwise harmful content, it can keep the content up without risk of liability unless and until a court specifically orders it taken down.  Oral argument included some discussion of whether the algorithms were “neutral,” but the plaintiffs’ counsel provided a compelling closing statement: “You can’t call it neutral once the defendant knows its algorithm is doing it.”

[Note – I apologize, I started writing this and accidentally hit publish too early.  A garbled post was up for about an hour while I was getting my haircut and eating breakfast.]