Shield online platforms for content moderation to work

I believe the Indian government should introduce Good Samaritan protections in its new Intermediary Guidelines, shielding platforms that moderate in good faith while prosecuting those who negligently allow violative content to remain online. There is a need for a balanced approach that encourages responsible moderation without stifling free speech.

This article was first published in The Mint.


Last week, President Donald Trump publicly reacted to the protests that followed the killing of George Floyd with a tweet that ended with the words “… when the looting starts, the shooting starts.” Shortly thereafter, Twitter, for the first time in its history, decided to hide the Presidential tweet behind a warning label that said the message glorified violence. This decision did not go down well with the Oval Office. Twitter had already fact-checked the President’s allegations of voter fraud through mail-in ballots, and to the administration it seemed as if Twitter was purposely denying the President of the United States his right to free speech.

The White House swiftly issued an Executive Order, stating that social media companies had to be passive bulletin boards and could not actively restrict speech. If they were going to censor content, they would be treated like content creators and made subject to the liabilities that content creators face. The Order went on to refer to Section 230(c) of the Communications Decency Act, 1996, from which intermediaries derive their immunity from prosecution, stating that the provision was not intended to give platforms the freedom to silence viewpoints they disliked.

Let me state upfront that I don’t believe this interpretation is entirely correct. While sub-section (1) of Section 230(c) does say that intermediaries will not be liable for content posted by users, sub-section (2) was specifically designed to allow Good Samaritan moderation of online content. Even in the early days of the internet, it was clear that regulators would not be able to moderate content without the assistance of private platforms. Sub-section (2) was supposed to make this possible, giving intermediaries immunity from liability for actions they took in good faith to restrict access to objectionable material. It was believed that, with this immunity, internet platforms would have the incentive they needed to moderate the content that flowed through their pipes.

As it happened, things did not work out as intended. Despite the broad protection from liability that Section 230(c) gave them, most internet companies chose to rely on sub-section (1), setting themselves up to operate as passive publishers of content. In several instances, websites have used this publishers’ immunity to establish businesses that, for all intents and purposes, actively encourage the posting of unlawful content. As a result, instances of hate speech, cyber-bullying, defamation and abuse have proliferated online.

Around the world, intermediary liability regimes have largely avoided the Good Samaritan approach that the original law pointed towards. In India, Section 79 of the Information Technology Act, 2000 offers intermediaries immunity from liability if they have neither initiated nor interfered with the transmission of the message. Not only does the section make no mention of good-faith moderation, it implies that tampering with the transmission of content would cost the intermediary that immunity.

Little wonder, therefore, that intermediary liability jurisprudence in India has moved in an entirely different direction. Rather than encouraging intermediaries to moderate content in good faith, the judgment in Shreya Singhal v. Union of India made it clear that internet companies had no obligation to take down content unless expressly directed to do so by a court order. While this meant internet companies could no longer be arm-twisted into taking down content, it offered no protection for good-faith take-downs of unlawful content.

The events of the past week make it clear that the notion of intermediary liability is about to undergo a re-think. The Executive Order called on the Federal Communications Commission in the United States to review the interaction between the various sub-sections of Section 230(c) with a view to ensuring that those engaging in censorship could not avail themselves of the protections granted to intermediaries. In the meantime, the Indian Government is about to push through new Intermediary Guidelines that require internet companies to deploy artificial intelligence tools to identify and filter illegal content. In both instances, Good Samaritan protections for moderation in good faith seem to have been passed over.

While a review of intermediary liability was perhaps unavoidable, I don’t believe the experience of the last two and a half decades is ground enough to discard the concept of Good Samaritan protections entirely. In a recent paper on Section 230 reform, Danielle Citron and Mary Anne Franks suggest that if we draft these provisions more explicitly, we might be able to achieve a better result. For instance, rather than merely offering protection for Good Samaritan actions, the law should prosecute Bad Samaritans, targeting for punishment those who permit the publication of unlawful content. They also suggest imposing a reasonable standard of care, so that we can reduce instances of abuse while still allowing the internet to flourish.

The Indian Government would do well to consider these suggestions in the new Intermediary Guidelines. After all, forcing intermediaries to use AI tools for moderation without giving them any good-faith protections will not end well.