The Supreme Court is deciding the future of the internet, and it acted like it
“We’re a court. We really don’t know about these things. These are not the nine greatest experts on the internet.”
Supreme Court Justice Elena Kagan made the wryly self-deprecating comment early in oral arguments for Gonzalez v. Google, a potential landmark case covering Section 230 of the Communications Decency Act of 1996. The remark was a nod to many people’s worst fears about the case. Gonzalez could unwind core legal protections for the internet, and it will be decided by a court that’s shown an appetite for overturning legal precedent and reexamining long-standing speech law.
But during a remarkably entertaining session of questions today, the court took an unexpectedly measured look at Section 230. The outcome in Gonzalez is far from certain, but so far, the debate suggests the court is reassuringly aware of how important the ruling will be — and of the potential consequences of screwing it up.
Gonzalez v. Google covers a very specific type of online interaction with potentially huge implications. The suit stems from an Islamic State shooting in Paris that killed student Nohemi Gonzalez in 2015. Her surviving family argued that YouTube had recommended videos by terrorists and therefore violated laws against aiding and abetting foreign terrorist groups. While Section 230 typically protects sites from liability over user-generated content, the petition argues that YouTube created its own speech with its recommendations.
“Every time anyone looks at anything on the internet, there is an algorithm involved.”
Today’s hearing focused heavily on “thumbnails,” a term Gonzalez family attorney Eric Schnapper defined as a combination of a user-provided image and a YouTube-generated web address for the video. Several justices seemed dubious that creating a URL and a recommendation sorting system should strip sites of Section 230 protections, particularly because thumbnails didn’t play a major part in the original brief. Kagan and others asked whether the thumbnail problem would go away if YouTube simply renamed videos or provided screenshots, suggesting the argument was a confusing technicality.
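To make that distinction concrete, here’s a minimal sketch of the pairing Schnapper described — user-supplied material joined to platform-generated material. The names, fields, and placeholder domain are invented for illustration and don’t reflect YouTube’s actual systems:

```python
from dataclasses import dataclass

@dataclass
class Thumbnail:
    """Hypothetical sketch of a 'thumbnail' as Schnapper defined it:
    user-supplied content paired with platform-generated content."""
    preview_image: bytes  # uploaded by the user (classic third-party content)
    video_url: str        # generated by the platform itself

def make_thumbnail(uploader_image: bytes, video_id: str) -> Thumbnail:
    # The petition's theory: the URL half of this pairing is the
    # platform's own creation, so displaying it is the platform's speech.
    return Thumbnail(
        preview_image=uploader_image,
        video_url=f"https://example.com/watch?v={video_id}",  # placeholder domain
    )
```

The justices’ skepticism boils down to how little the platform contributes in this pairing: a generated URL is plumbing, and it’s hard to see renaming it or swapping the image for a screenshot as the difference between immunity and liability.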
The fine-line distinctions around Section 230 were a recurring theme in the hearing, and for good reason. Gonzalez targets “algorithmic” recommendations like the content that autoplays after a given YouTube video, but as Kagan pointed out, pretty much anything you see on the internet involves some kind of algorithm-based sorting. “This was a pre-algorithm statute, and everyone is trying their best to figure out how this statute applies,” Kagan said. “Every time anyone looks at anything on the internet, there is an algorithm involved.”
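Kagan’s point is easy to illustrate: even the plainest way of showing user content — newest first — is a sorting rule, and a recommendation feed is the same operation with a different key. A toy sketch, with invented posts and fields:

```python
from datetime import datetime

# Hypothetical posts; every real feed applies *some* ordering rule to data like this.
posts = [
    {"title": "cute cats", "uploaded": datetime(2023, 2, 1), "views": 9000},
    {"title": "pilaf from Uzbekistan", "uploaded": datetime(2023, 2, 20), "views": 120},
]

# Even "no recommendations" is an algorithm: a reverse-chronological sort.
newest_first = sorted(posts, key=lambda p: p["uploaded"], reverse=True)

# A recommendation feed is the same operation with a different key --
# the line Gonzalez asks the court to draw liability around.
most_viewed = sorted(posts, key=lambda p: p["views"], reverse=True)

print([p["title"] for p in newest_first])
print([p["title"] for p in most_viewed])
```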
Introducing liability to these algorithms raises all kinds of hypothetical questions. Should Google be punished for returning search results that link to defamation or terrorist content, even if it’s responding to a direct search query for a false statement or a terrorist video? And conversely, is a hypothetical website in the clear if it writes an algorithm designed deliberately around being “in cahoots with ISIS,” as Justice Sonia Sotomayor put it? While it (somewhat surprisingly) didn’t come up in today’s arguments, at least one ruling has found that a site’s design can make it actively discriminatory, regardless of whether the result involves information filled out by users.
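The search-engine hypothetical turns on the same mechanics. A bare-bones engine returns whatever matches the query, with no notion of whether a result is defamatory or terrorist content — which is why liability for the results feels so fraught. A deliberately naive sketch (the index and function are invented for illustration):

```python
def search(index: dict[str, list[str]], query: str) -> list[str]:
    """Hypothetical toy search engine: exact keyword lookup, nothing more.

    It has no knowledge of what the linked pages say -- it simply
    answers the user's query with whatever the index contains."""
    return index.get(query.lower(), [])

# A user who searches for a false statement gets links to it back.
index = {"alleged false claim": ["https://example.com/defamatory-post"]}
print(search(index, "Alleged False Claim"))
```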
Getting the balance wrong here could make basic technical components of the internet — like search engines and URL generation — a legal minefield. There were a few skeptical remarks about fears of a Section 230-less web apocalypse being overblown, but the court repeatedly asked how changing the law’s boundaries would practically affect the internet and the businesses it supports.
The court sometimes seemed frustrated it had taken up the case at all
As legal writer Eric Goldman notes in a write-up of the hearing, justices sometimes seemed frustrated they’d taken up the Gonzalez case at all. There’s another hearing tomorrow for Twitter v. Taamneh, which also covers when companies are liable for allowing terrorists to use their platforms, and Justice Amy Coney Barrett floated the possibility of using that case to rule that they simply aren’t — something that could let the court avoid touching Section 230 by making the questions around it moot. Justice Brett Kavanaugh also mulled whether Congress, not the court, should be responsible for making any sweeping Section 230 changes.
That doesn’t put Google or the rest of the internet in the clear, though. Gonzalez almost certainly won’t be the last Section 230 case, and even if this case is dismissed, Google attorney Lisa Blatt faced questions about whether Section 230 is still serving one of its original purposes: encouraging sites to moderate effectively without the fear of being punished for it.
Blatt raised the specter of a world that’s either “Truman Show or horror show” — in other words, one where web services either remove anything remotely legally questionable or refuse to look at what’s on their site at all. But it’s not clear how convincing that defense is, especially in nascent areas like artificial intelligence-powered search, which Justice Neil Gorsuch raised repeatedly as a sign of platforms’ strange future. The Washington Post spoke with prominent Section 230 critic Mary Anne Franks, who expressed tentative hope that the justices seemed open to changing the rule.
Still, the arguments today were a relief after the past year’s nightmare legal cycle. Even Justice Clarence Thomas, who’s written some spine-tinglingly ominous opinions about “Big Tech” and Section 230, spent most of his time wondering why YouTube should be punished for providing an algorithmic recommendation system that covered terrorist videos alongside ones about cute cats and “pilaf from Uzbekistan.” For now, that might be the best we can expect.