In the early internet era, websites were liable for third-party content, leading to legal challenges. US Section 230 was introduced, protecting online platforms from being treated as publishers of user-generated content. However, in Gonzalez v. Google, YouTube's recommendation algorithms are under scrutiny, potentially redefining Section 230's protections. The decision could reshape online content moderation globally.

This article was first published in The Mint.

In 1995, when the World Wide Web was still in its infancy, the pioneers of that nascent industry were seen as no different from publishers. The websites they ran were treated like magazines to which writers could contribute articles. And just as magazine publishers could be sued for what their authors wrote, websites had to be accountable for what users posted.

A series of decisions in US courts sharply underscored the exposure of this brand-new industry to third-party content. In Cubby v. CompuServe, the court laid down a two-step test to hold online services liable for third-party content. It said that a website would only be immune from prosecution if it had: (i) no editorial control over the content; and (ii) no reason to know that the content was objectionable. Stratton Oakmont v. Prodigy extended this by declaring that any website that moderated user-generated content would not be entitled to immunity—a ruling which, somewhat perversely, punished companies that were taking the trouble to remove inappropriate content from the web.

The Twenty-Six Words

Realising the absurdity of this outcome, US Senators Cox and Wyden set out to enact a law that would restore the much-needed protection of internet companies. They inserted into the Communications Decency Act a new Section 230 that stated: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” Since this was a small part of a far more controversial piece of legislation, it flew completely under the radar, with most of the legislators who approved it unaware of the profound impact it would have on how the internet would be governed.

In his book The Twenty-Six Words That Created the Internet, Jeff Kosseff argues that it is precisely because of the freedom of speech guaranteed by these 26 words, that the internet, as we know it today, has come into existence. US courts have consistently interpreted Section 230 in a way that encourages businesses to be unafraid of third-party content, allowing them to innovate fearlessly. This is why so many successful internet businesses are based in the US, where they can flourish without having to constantly look over their shoulder.

Challenge to Section 230

This week the US Supreme Court started hearing oral arguments in Gonzalez v. Google, a case brought against YouTube for not only making ISIS videos available on its website, but actively disseminating them through its recommendation algorithms—placing paid advertisements in proximity to ISIS-created content and allegedly sharing this revenue with ISIS.

In order to win, the petitioners will have to argue that YouTube is not entitled to immunity under Section 230. To do that, they are looking to distinguish between “recommended content” on one hand and the “recommendations” that YouTube makes on the other. Recommended content uploaded by the user would fall within the definition of “information provided by another information content provider”, which is entitled to protection under Section 230. However, the algorithmic recommendations that YouTube makes, the petitioners argue, are akin to placing a message in big bold letters next to a video stating, “You should watch this.” Since the message is not “information provided by another information content provider”, it cannot be entitled to the Section 230 exemption.

This case has become something of a lightning rod among conservative politicians, most of whom believe that tech companies have grown too powerful. They have long complained about the censorship carried out by these entities, arguing that their moderation decisions have more to do with political ideology than freedom of speech. To them, this case is an opportunity to bring Big Tech companies under control, and, given that at least one of the judges on the conservative-majority court has publicly remarked on questionable precedents established over the years in relation to Section 230, they are hopeful of a favourable outcome.

If the petitioners succeed in convincing the court to apply a narrower interpretation of Section 230 than the one currently in use, the legal exposure of large tech platforms will increase so significantly that they will be forced to radically change the ways in which they operate. Online intermediaries will be forced to filter all the speech that appears on their sites—to the point where some may choose not to host user-generated content at all.

Impact on Moderation

There is no doubt that this will change the way in which content moderation takes place on the internet. In many other countries, internet businesses are already required to expeditiously remove content that is allegedly defamatory or illegal. In some instances, they have to proactively screen posts to ensure that harmful third-party content does not even appear online. So far, the approach that these countries have taken to content moderation has been looked down upon as inappropriate for the internet age. If, however, this case ends up being decided in favour of the petitioners, the US could go down a similar path.

Among the many amicus briefs that have been filed before the court, there is one by Cox and Wyden, the original authors of the law, urging the court to retain the protections they drafted it to guarantee.

The US Supreme Court is expected to issue its opinion this summer, and no matter what it rules, it will be significant.

And the whole world will be watching.