The trade-off between privacy and content traceability
End-to-end encryption is essential for privacy but poses a challenge: it can be misused for criminal activities, such as the distribution of Child Sexual Abuse Imagery (CSAI). A paper presented at the Web Conference 2019 highlights the exponential growth in CSAI and correlates it with technological advancements. The dilemma lies in balancing the protection of civil liberties through encryption against the prevention of its exploitation for criminal purposes.
This article was first published in The Mint.
The ability to have private conversations is fundamental to our modern conception of privacy. I have argued time and again in this column that in our current technological and political context, it is critical that we mandate the use of robust end-to-end encryption to ensure that private conversations are secure from eavesdropping by governments and private corporations alike.
However, no discussion of encryption is complete unless we also speak about how these systems get misused. As much as encrypted messaging is necessary for investigative journalists, whistle-blowers, and abuse helplines, it is tremendously attractive to criminals looking to take advantage of the fact that messages sent over these networks cannot be traced back to them. As a result, these platforms find themselves being used for criminal activities. Of particular concern is the way in which these networks are used to distribute child sexual abuse imagery.
In a paper presented at the Web Conference 2019 in San Francisco, it was suggested that the exponential increase in the proliferation of Child Sexual Abuse Imagery (CSAI) on the internet over the last few years is probably directly correlated with the growth of online sharing platforms. Using data from the National Center for Missing and Exploited Children (NCMEC) in the US, an organization that tracks all CSAI content detected by public and online sharing platforms, the paper reported a median year-over-year growth in reported CSAI of as much as 51%. In its first 10 years of operation, NCMEC received only 565,000 reports; in 2017 alone, it received over 9.6 million.
This in itself is a cause for concern. However, what is particularly worrying is the increasing globalization of the problem. Ten years ago, 70% of all reported CSAI related to abuse committed in the US. Today, 68% of reports relate to abuse in Asia, 19% to abuse in the Americas, 6% to abuse in Europe, and 7% to abuse in Africa. India, Indonesia, and Thailand account for 37% of all reported CSAI, with India leading the list.
The paper suggests that this increase is a direct consequence of improvements in technology: smartphones, high-bandwidth internet connectivity, low-cost cloud storage, and the plethora of messaging applications we can choose from today. If you consider video content alone, it would not be an exaggeration to suggest that the extraordinary growth in CSAI videos, from under 1,000 reports a month in 2013 to more than 2 million reports per month in 2017, is almost entirely on account of the proliferation of smartphones that can, at the press of a button, record a high-definition video and upload it directly onto the internet. As much as 84% of CSAI images and 91% of videos have only ever been reported once, which suggests that a prodigious amount of new content is constantly being created.
To their credit, technology companies have been working on this issue, but so far their efforts have simply not been able to keep pace with the problem. Most companies have large teams of human reviewers who scan through hundreds of millions of user-generated images to identify CSAI. Flagged content is then processed by Microsoft's PhotoDNA tool, which generates a fingerprint of the image that allows it to be identified even if the original has been transformed to avoid detection. However, this is not enough. PhotoDNA can only detect images that have already been flagged during a manual review; it cannot itself identify new content. Given the volume of original CSAI being created, it is impossible for human reviewers to manually flag all the content going online. What is needed is a technological solution, and the paper suggests a number of new techniques that can be used, including scene clustering and facial clustering, which automatically flag content without the need for human intervention.
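PhotoDNA's actual algorithm is proprietary, but the idea it rests on, perceptual hashing, is easy to illustrate. The sketch below uses a simple "average hash" as a stand-in; it is not PhotoDNA, and the function names and matching threshold are purely illustrative. The key property is the same: visually similar images map to fingerprints that differ in only a few bits, so a resized or re-encoded copy of known content can still be matched.

```python
# A minimal sketch of perceptual hashing, the general idea behind
# fingerprinting tools like PhotoDNA. This "average hash" is a simple
# stand-in, not PhotoDNA itself.
from PIL import Image  # pip install Pillow

def average_hash(path: str, size: int = 8) -> int:
    """Downscale to size x size grayscale, then set one bit per pixel:
    1 if the pixel is brighter than the image's mean, else 0."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(h1: int, h2: int) -> int:
    """Number of differing bits; a small distance means the images are
    likely the same, even after resizing, re-encoding, or slight edits."""
    return bin(h1 ^ h2).count("1")

# Hypothetical usage: compare an uploaded image against fingerprints of
# known, previously flagged content (names below are illustrative).
# if hamming_distance(average_hash("upload.jpg"), known_hash) <= 5:
#     flag_for_review()
```

Unlike a cryptographic hash, where changing a single pixel produces a completely different digest, a perceptual hash degrades gracefully, which is what makes it useful for detecting transformed copies of flagged material.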
What the paper does not specifically address is that, because so much of this content is shared over secure messaging networks, the encryption these applications deploy effectively stymies their own ability to review the content flowing through them. This means that no matter how good the technology we build to automatically detect CSAI might be, it will be useless when the content is shared through end-to-end encrypted messaging systems.
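To see concretely why this is so, consider the minimal sketch below (in Python, using the cryptography package; the byte strings are placeholders, not real content). A server relaying an end-to-end encrypted message sees only ciphertext, and any fingerprint computed over that ciphertext bears no relation to a fingerprint of the original image; only the endpoints, which hold the key, can recover and scan the content.

```python
# A minimal sketch of why end-to-end encryption defeats server-side
# scanning. Requires the 'cryptography' package (pip install cryptography).
import hashlib
from cryptography.fernet import Fernet

key = Fernet.generate_key()            # held only by sender and recipient
cipher = Fernet(key)

image_bytes = b"...raw image data..."  # placeholder for a real image
ciphertext = cipher.encrypt(image_bytes)

# The relay server sees only the ciphertext. A fingerprint of what it can
# observe is unrelated to a fingerprint of the original image:
print(hashlib.sha256(image_bytes).hexdigest())  # fingerprint of plaintext
print(hashlib.sha256(ciphertext).hexdigest())   # unrelated value

# Only the endpoints, which hold the key, can recover the image to scan it:
assert cipher.decrypt(ciphertext) == image_bytes
```

The same logic applies to perceptual fingerprints: computed over ciphertext, they carry no information about the underlying image, which is precisely the guarantee encryption is designed to provide.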
Of the many trade-offs we have to make in the arena of technology policy, this, to me, is probably the hardest. I have long been a strong votary of the need to build encrypted networks to protect essential civil liberties. However, I struggle to reconcile that position with the knowledge that once built, these networks will invariably get used for the most vile of criminal activities. From the statistics in the paper cited earlier, it now appears that the very existence of these networks and the immunity from traceability that they provide have actively encouraged the proliferation of heinous crimes against innocent children.
I am not sure that we will ever be able to devise a truly effective solution that strikes the appropriate balance between these two concerns. As much as policy issues are never binary, technology unfortunately is, and opening even the tiniest of backdoors to allow distributors of CSAI to be tracked down will destroy the protection that encryption offers us all.