Gatekeepers at the Edge

We gave internet companies immunity for the content that flows through their pipes because communication infrastructure should have no opinion on the content it carries. This, however, does not solve the problem of offensive content; at best, it passes the buck. We need a framework for determining what is acceptable speech. Governments should develop prohibited content dashboards so that internet companies can understand clearly what content is permissible and what is not.

This article was first published in The Mint. You can read the original at this link.


The realisation that the President of the United States might have been directly responsible for the assault on the US Capitol prompted all major social media platforms to terminate his accounts for fear that a milder course of action might have resulted in further incitement to violence. Never before have private companies acted to muzzle the ability of the leader of one of the most powerful countries in the world to communicate. Then again, at no previous time in history has it been possible for private entities to do so.

Data Blindness

The internet is the most efficient data communication network we have ever built. For the most part, this is because its transport layer is designed to be data blind—responsible for transporting data packets without knowing what they contain. The closer you get to the edges of the internet, the more this blindness dissipates, and, because internet platforms are often aware of the content they host, they could be held liable for any offensive or illegal user-generated content found on their platforms.

It was in order to protect the fledgling internet industry from this liability that the US Government enacted Section 230 of the Communications Decency Act, granting internet intermediaries immunity from content liability. I have written previously about Section 230, and in particular about the Good Samaritan protection written into sub-section (2):

Even in the early days of the internet, it was clear that regulators would not be able to moderate content without the assistance of private platforms. Sub-section (2) was supposed to make this possible by giving intermediaries immunity from liability for actions they took in good faith to restrict access to unlawful material. It was believed that with this immunity, internet platforms would have the assurance they needed to moderate the content that flowed through their pipes.

As it happened, few companies chose to go down that route, preferring instead to rely on the absolute protection available to them under sub-section (1), which operated by law to create a presumption of blindness for businesses at the edge, replicating through legal fiction the design of the pipes at the internet's core.

This legal presumption of blindness was made conditional on the fact that, just like the pipes at the core of the internet, companies at the edge had to refrain from interacting with the content being shared on their platforms—serving it up as is, without any moderation whatsoever. This is why internet companies have historically been reluctant to take down content, preferring to be directed to do so by a court rather than taking these decisions on their own.

Gatekeepers at the Edge

But, as the internet evolved, a few large companies at the edge of the internet became gateways to our interactions online, functioning as funnels for user interaction. With the rise in user numbers, it was no longer feasible for these platforms to operate as dumb pipes, serving up content without heed to what it contained. As their international footprint grew, and regional variations in law and convention forced them to contend with different requirements in every new country they expanded into, this problem was only exacerbated.

It soon became apparent that they had no option but to function as gatekeepers for user behaviour. They were forced, as a result, to develop more and more detailed codes of conduct, setting out how users were expected to behave on their platforms, even though by doing so they risked losing their immunity from prosecution.

These codes of conduct are the basis on which the internet functions today. A failure to comply with them can result in suspension from these platforms, and given how important these platforms are to our day-to-day interactions, the threat of being cut off has had a powerful coercive effect. Every now and then, these provisions have been invoked to suspend (or even expel) users whose behaviour has been egregious. Unlike in the early days of the internet, when online companies did all they could to avoid playing this role, today it is the platforms at the edge that are determining what is right or wrong—and doing so on the basis of the values and principles enshrined in their codes of conduct.

Until last week, it was not entirely clear exactly how far this ability to control user behaviour would be taken. Despite the power that they wield, social media companies have always exercised restraint, particularly when it came to censoring the speech of persons with political influence. But following the assault on the US Capitol, every social media company independently came to the conclusion that a line had been crossed, and that the access of the President of the United States to online audiences needed to be curtailed.

Passing the Buck

While much of the commentary since then has been about how social media companies have too much power, I believe the question we should really be asking ourselves is why this came to pass. When we gave companies at the edge of the internet immunity from liability, we did so because we believed that communication infrastructure should have no opinion on the content that it carried. But merely granting immunity from prosecution does not solve the problem of offensive content. At best, it passes the buck—and there was no one to receive it.

What was needed was a framework for determining acceptable speech—a framework that we should have rolled out at the same time that we extended to internet intermediaries immunity from prosecution. This framework could have been designed for the internet, giving online companies the ability to bake these restrictions directly into the tools and filters they use to automatically regulate content. Had we done this, it would have given us an appropriate counterpoint to intermediary liability protection—and would have saved us from the situation we have found ourselves in.

It is not too late to remedy this. Even now, governments can take it upon themselves to develop prohibited content dashboards that give internet companies clear direction on what sort of content is allowed and what is not. Not only will that provide clarity as to what is permitted, but it will also properly vest responsibility where it should lie.