Content Moderation

The Loss of Immunity

While the world has been agog at the arrest of the CEO of Telegram, followed in quick succession by Brazil's decision to boot X (formerly Twitter) out of the country, a decision in the US is likely to have a much more far-reaching impact on how content is delivered online.

Moderating Systems

Content moderation challenges arise from the vast volume of online content and diverse user beliefs. Current moderation uses automated tools and human moderators, but both have flaws. Evelyn Douek suggests a “systems thinking” approach, focusing on systemic solutions rather than individual errors. As India drafts the Digital India Act, a shift towards addressing systemic issues in content moderation is essential.

Dis-Content

In the early internet era, websites were held liable for third-party content, leading to legal challenges. Section 230 of the US Communications Decency Act was introduced to protect online platforms from being treated as publishers of user-generated content. In Gonzalez v. Google, however, YouTube's recommendation algorithms came under scrutiny, potentially redefining the scope of Section 230's protections. The decision could reshape online content moderation globally.

Appealing Moderation

The draft amendments to the Information Technology Rules, 2021 will require intermediaries to align their community standards with Indian law and will create a Grievance Appellate Committee to hear complaints about “problematic content.” Critics view this as a tool of government censorship, while others see a need to balance government control against private enterprise.

Backfire

Any attempt to change the beliefs of vaccine skeptics using facts is bound to fail. Thanks to the backfire effect, they will bend the facts presented to them to fit their existing beliefs rather than allow new evidence to convince them that those beliefs were wrong.

Intermediaries Liable

The Intermediary Guidelines 2021 just make things more confusing. They classify social media intermediaries on the basis of registered users rather than active users, which matters for companies with a relatively small active Indian user base. They apply to services that provide messaging as merely an ancillary feature, and most digital platforms do. Voluntary verification is also a strange requirement: since it is not mandatory, it does nothing to ensure traceability. And anyone who fails to comply with even the most minor requirements of the regulations will lose their intermediary liability protections.

Gatekeepers at the Edge

We gave internet companies immunity for the content that flows through their pipes because communication infrastructure should have no opinion on the content it carries. This, however, does not solve the problem of offensive content; at best, it passes the buck. We need a framework for determining what is acceptable speech. Governments should develop prohibited content dashboards so that internet companies can understand clearly what content is permissible and what is not.
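
One way to picture such a dashboard is as a machine-readable catalogue of prohibitions that platforms can poll and act on. The sketch below is purely illustrative; the schema, field names, and rules feed are assumptions for the sake of the example, not part of any existing regulation or guideline.

```python
import json
from dataclasses import dataclass

@dataclass
class ProhibitedContentRule:
    """One entry in a hypothetical government-published dashboard."""
    rule_id: str         # stable identifier platforms can cite in take-down notices
    category: str        # e.g. "incitement" or "court-ordered removal"
    legal_basis: str     # the statute or order the prohibition derives from
    effective_from: str  # ISO date on which the rule takes effect

def load_rules(path: str) -> list[ProhibitedContentRule]:
    """Parse a published rules feed into typed records."""
    with open(path) as f:
        return [ProhibitedContentRule(**entry) for entry in json.load(f)]

# A platform's compliance job could diff today's feed against yesterday's
# and route any new rule_ids into its moderation pipeline.
```

The point of the exercise is that permissibility becomes something a platform can look up rather than guess at.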

Moderating with Moderation

Digital platforms face a number of challenges when it comes to content moderation, particularly when compared with traditional media's editorial oversight. Most platforms adopt an “after-the-fact” approach, taking content down once it has been flagged, but they may be better off using algorithmic tools to dampen the virality of offensive content without infringing on free speech.
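
To make the damping idea concrete, a ranking system could scale down a post's distribution score in proportion to a classifier's risk estimate, so borderline content stays up but spreads more slowly. The snippet below is a minimal sketch under assumed inputs; the risk score, threshold, and floor are hypothetical parameters, not any platform's actual mechanism.

```python
def dampened_score(base_score: float, risk: float,
                   threshold: float = 0.7, floor: float = 0.1) -> float:
    """Reduce a post's ranking score as classifier risk rises.

    base_score: the recommender's normal relevance score
    risk:       probability (0 to 1) that the post violates policy,
                from some upstream classifier (assumed to exist)
    threshold:  risk level above which damping kicks in
    floor:      minimum multiplier, so content is throttled, not hidden
    """
    if risk <= threshold:
        return base_score  # below threshold: rank normally
    # Linearly interpolate the multiplier from 1.0 down to `floor`
    # as risk climbs from `threshold` to 1.0.
    multiplier = 1.0 - (1.0 - floor) * (risk - threshold) / (1.0 - threshold)
    return base_score * multiplier

# Example: a post the recommender scores at 0.9 but which is flagged at
# 0.85 risk keeps only about half its reach instead of being taken down.
print(dampened_score(0.9, 0.85))  # ≈ 0.495
```

Unlike a take-down, the post remains visible to anyone who seeks it out; only its amplification is curtailed.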

Shield online platforms for content moderation to work

I believe the Indian government should introduce Good Samaritan protections in its new Intermediary Guidelines, shielding platforms that moderate in good faith while still prosecuting those who negligently allow violative content to remain on their services. What we need is a balanced approach that encourages responsible moderation without stifling free speech.

The value of scepticism in the age of deep-fake videos

With the rise of hyper-realistic deepfakes, discerning truth is becoming harder. We need to learn to be more sceptical of the content we receive and to constantly question its authenticity. This is not hard to do; we have done it before.