Moderating with Moderation
Digital platforms face challenges in content moderation that traditional media, with their editorial oversight, never did. Most platforms adopt an “after-the-fact” approach, taking content down only once it has been flagged, but they may be better off using their algorithmic tools to dampen the virality of offensive content without infringing on free speech.
This article was first published in The Mint. You can read the original at this link.
A couple of months ago, Spotify CEO Daniel Ek found himself in a difficult spot. No sooner had he signed on Joe Rogan, a wildly popular podcast host (reportedly for over $100 million), than Spotify employees were up in arms over transphobic comments made on his show. There was no doubt that the episode was offensive to the LGBTQI community, but Ek was worried about the free speech implications of censoring the content, however offensive it was.
Review
In the pre-digital world, content was only ever distributed by media companies that reviewed every last word before it was made available to the public. Because they were liable for what they published, these organisations employed large editorial teams to balance the need to report news against considerations of accuracy, decency and the law.
Digital platforms, on the other hand, have never had to worry about this sort of oversight. From the very early days of their existence, they were shielded from prosecution by intermediary liability protections. I have previously written about how what was originally intended to offer Good Samaritan protection for the good faith moderation of inappropriate content unfortunately metamorphosed into a general exemption for internet companies from all liability:
Despite the broad protection from liability that Section 230(c) gave them, most internet companies chose to rely on sub-section (1) of that section, setting themselves up to operate as passive publishers of content. In several instances, websites have used this publishers’ immunity to establish businesses that, for all intents and purposes, actively encouraged the posting of unlawful content. As a result, instances of hate speech, cyber-bullying, defamation and abuse have proliferated online.
As a result, digital platforms ignored the moderation rituals that traditional media companies fussed over, focussing instead on ensuring that content flowed as smoothly as possible from producers to consumers. They only took down content after the fact, and even then only if someone complained.
As digital became mainstream, the unfiltered anarchy that these platforms had spawned began to reveal its dark side. The content we were being exposed to was more offensive and divisive than anything we’d experienced before. It soon became clear that the lack of moderation in the digital environment was bringing out the worst in people. And giving the worst sorts of people a stage they would never otherwise have had.
Oversight
Facebook’s response to this has been to establish an independent Oversight Board to which appeals against the decisions of its moderation teams can be preferred. This approach keeps in place the tiered (algorithmic + human) content moderation systems that Facebook currently employs, but adds a layer of redress to deal with edge cases. Facebook recently announced that its Oversight Board was at long last ready to hear appeals. It will be interesting to watch how this new judicial framework for digital platforms evolves, and doubtless I will be writing more about it in the months to come.
But as interesting as this is as a governance solution for the internet age, it is not without its shortcomings. In the first place, the Oversight Board only comes into play after the fact. By the time a decision is referred to it, various parties will already have suffered as a consequence of Facebook’s moderation decision. Secondly, the Board is limited by the number of cases it can actually review - regardless of how many appeals it manages to hear, they will still be only a tiny fraction of all the moderation decisions that parties are dissatisfied with. Finally, as much as the Board has been selected with a view to being regionally representative, it is impossible for an essentially international body interpreting a single set of community standards to properly address local concerns.
Exceptionalism
This, to my mind, is the heart of the problem. Traditional media companies have always respected the inherent diversity in global values and legal norms by developing regional strategies for distribution. Books were simply not shipped to countries in which they were banned. Films abided by the decisions of censor boards in each of the countries in which they were distributed - making the specific cuts required by local regulators before they were shown in local theatres and on television.
Digital platforms, emboldened by internet exceptionalism, have simply ignored these variances, attempting to apply a uniform set of community standards to all their moderation decisions. Granted, these standards are based on the liberal values to which all modern democracies aspire, but even so platforms have struggled to strike a balance between taking down offensive content and protecting their users’ right to free speech.
This is the quandary that Daniel Ek found himself in when faced with an internal revolt over Joe Rogan’s unabashed transphobia. It is what users face whenever they try to get social media companies to take down offensive content.
Solutions
Most of us believe that it should be easy for platforms to implement more effective technical solutions. After all, they have demonstrated that their algorithms can deliver content that is narrowly targeted to the users who would most appreciate it. If they can do this with such fine-grained precision, surely they can infer what content will be offensive to whom and take it down before it does any damage.
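To see why that expectation feels plausible, here is a toy sketch in Python of the intuition: the same per-user scoring machinery that predicts relevance could, in principle, be pointed at predicting offence. Every profile, topic name, weight and threshold below is invented for illustration; this is not any platform’s actual system.

```python
# Toy illustration only: the dot-product-style scoring used to predict
# "will this user enjoy this post?" could, in principle, also predict
# "will this user find this post offensive?". All names and numbers are
# hypothetical, not any platform's real system.

def score(profile: dict[str, float], post_topics: dict[str, float]) -> float:
    """Simple dot product between a user profile and a post's topic weights."""
    return sum(profile.get(topic, 0.0) * weight for topic, weight in post_topics.items())

# Hypothetical per-user profiles learnt from past behaviour.
user_interests = {"podcasts": 0.9, "comedy": 0.7}
user_sensitivities = {"transphobic_language": 0.95, "profanity": 0.3}

# Hypothetical topic breakdown of a single podcast episode.
episode_topics = {"podcasts": 1.0, "comedy": 0.8, "transphobic_language": 0.6}

relevance = score(user_interests, episode_topics)              # "would they like it?"
predicted_offence = score(user_sensitivities, episode_topics)  # "would it offend them?"

print(f"relevance={relevance:.2f}, predicted_offence={predicted_offence:.2f}")

# The absolutist expectation: if predicted offence crosses some threshold,
# take the content down before anyone sees it.
OFFENCE_THRESHOLD = 0.5
if predicted_offence > OFFENCE_THRESHOLD:
    print("naive approach: take it down")
```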
When we look for solutions, we tend to think in absolute terms. We want offensive content to be completely expunged from digital platforms so that it has no chance of infecting minds with its bile. But it is impossible to implement an absolute solution like this without curtailing the rights of those who posted the content. And while all speech is subject to reasonable restriction, I would argue that digital platforms have neither the ability nor the legal authority to determine where that line should be drawn.
What if we took a less binary approach? Social media companies have powerful amplification tools that they use to promote popular content. What if we insisted that they use these tools in reverse - so that instead of amplifying provocative content, they dampen its virality? Rather than promoting offensive content, they could use the same tools to keep it from trending. So long as they don’t take content down, they can’t be accused of violating freedom of speech.
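To make the idea concrete, here is a minimal sketch of what “dampening instead of deleting” could look like, assuming a hypothetical engagement score and a hypothetical offensiveness signal (the platforms’ real ranking systems are not public): the score that determines feed order is divided by a factor that grows with predicted offensiveness, so the content sinks rather than disappears.

```python
# A minimal sketch of "dampen, don't delete": the score that determines feed order
# is scaled down as predicted offensiveness rises, but the post is never removed.
# The dataclass, signals and numbers are hypothetical illustrations, not any
# platform's actual ranking system.

from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    engagement_score: float      # the signal the platform would normally amplify
    offence_probability: float   # hypothetical classifier output in [0, 1]

def ranked_score(post: Post, dampening_strength: float = 4.0) -> float:
    """Feed-ordering score: engagement divided by an offence-driven dampening factor."""
    dampening_factor = 1.0 + dampening_strength * post.offence_probability
    return post.engagement_score / dampening_factor

def build_feed(posts: list[Post]) -> list[Post]:
    # Every post stays available (no takedown); only the ordering - and hence the
    # virality - changes.
    return sorted(posts, key=ranked_score, reverse=True)

if __name__ == "__main__":
    feed = build_feed([
        Post("benign-and-popular", engagement_score=90.0, offence_probability=0.05),
        Post("provocative-and-viral", engagement_score=100.0, offence_probability=0.90),
        Post("ordinary", engagement_score=40.0, offence_probability=0.10),
    ])
    for post in feed:
        print(post.post_id, round(ranked_score(post), 1))
    # The provocative post ends up at the bottom of the feed even though it has the
    # highest raw engagement - demoted, but never censored.
```

The point of the sketch is that demotion is a tunable dial - the hypothetical dampening_strength parameter - rather than the binary switch that takedowns force on moderators.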
And, as we all know, content that is hard to find today might as well not exist.