Supreme Court Weighs Tech Company Protections in Google Case

WASHINGTON — In a case with the potential to alter the very structure of the internet, the Supreme Court on Tuesday explored the limits of a federal law that shields social media platforms from legal responsibility for what users post on their sites.

The justices seemed to view the positions taken by the two sides as too extreme and expressed doubts about their own competence to find a middle ground. “These are not the nine greatest experts on the internet,” Justice Elena Kagan said of the Supreme Court.

Others had practical concerns. Justice Brett M. Kavanaugh said the court should not “crash the digital economy.”

The case was brought by the family of Nohemi Gonzalez, a 23-year-old college student who was killed in a restaurant in Paris during the terrorist attacks in November 2015, which also targeted the Bataclan concert hall. Eric Schnapper, a lawyer for the family, argued that YouTube, a subsidiary of Google, bore responsibility because it had used algorithms to push Islamic State videos to interested viewers, using information that the company had collected about them.

“We are focusing on the recommendation function,” Mr. Schnapper said.

But Justice Clarence Thomas said that recommendations were vital to making internet platforms useful. “If you’re interested in cooking,” he said, “you don’t want thumbnails on light jazz.” He later added, “I see these as suggestions and not really recommendations.”

The federal law at issue in the case, Section 230 of the Communications Decency Act, also shields online platforms from lawsuits over their decisions to take content down. The case gives the justices the opportunity to narrow the scope of the shield and expose the platforms to lawsuits over whether they had steered people to posts that promote extremism, advocate violence, harm reputations and cause emotional distress.

Mr. Schnapper said YouTube should be liable for its algorithm, which he said systematically recommended videos inciting violence and supporting terrorism. The algorithm, he said, was YouTube’s speech and distinct from what users of the platform had posted.

Justice Kagan pressed Mr. Schnapper on the limits of his argument. Did he also take issue with the algorithms Facebook and Twitter use to generate people’s feeds? Or with search engines?

Mr. Schnapper said all of those could lose protection under some circumstances, a response that seemed to surprise Justice Kagan.

Justice Amy Coney Barrett asked whether Twitter users could be sued for retweeting ISIS videos. Mr. Schnapper said the law at issue in the case might allow such a suit. “That’s content you created,” he said.

Section 230 was enacted in 1996, in the infancy of the internet. It was a reaction to a decision holding an online message board liable for what a user had posted because the service had engaged in some content moderation.

The provision said, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

The provision helped enable the rise of social networks like Facebook and Twitter by ensuring that the sites did not assume legal liability for every post.

Malcolm L. Stewart, a lawyer for the Biden administration, argued in support of the family in the case, Gonzalez v. Google, No. 21-1333. He said that successful lawsuits based on recommendations would be rare but that the immunity provided by Section 230 was generally unavailable for such claims.

Lisa S. Blatt, a lawyer for Google, said the provision gave the company complete protection from suits like the one brought by Ms. Gonzalez’s family. YouTube’s algorithms are a form of editorial curation like search engine results or Twitter feeds, she said. Without the ability to provide content of interest to users, she said, the internet would be a useless jumble.

“All publishing requires organization,” she said.

A ruling against Google, she said, would force sites either to take down any content that was remotely problematic or to allow all content, no matter how vile. “You’d have ‘The Truman Show’ versus a horror show,” she said.

Justice Kagan asked Ms. Blatt if Section 230 would protect “a pro-ISIS” algorithm or one that promoted defamatory speech. Ms. Blatt said yes.

Section 230 has faced criticism across the political spectrum. Many liberals say it has shielded tech platforms from responsibility for disinformation, hate speech and violent content. Some conservatives say the provision has allowed the platforms to grow so powerful that they can effectively exclude voices on the right from the national conversation.

The justices will hear arguments in a related case on Wednesday, also arising from a terrorist attack. That case, Twitter v. Taamneh, No. 21-1496, was brought by the family of Nawras Alassaf, who was killed in a terrorist attack in Istanbul in 2017.

The question in that case is whether Twitter, Facebook and Google may be sued under the Antiterrorism Act of 1990, on the theory that they abetted terrorism by permitting the Islamic State to use their platforms. If the justices were to say no, the case against Google argued on Tuesday could be moot.

Whatever happens in the cases argued this week, both involving the interpretation of statutes, the court is very likely to agree to consider a looming First Amendment question arising from laws enacted in Florida and Texas: May states prevent large social media companies from removing posts based on the views they express?