Privacy Self-Management

When there were limited uses to which data could be put, it was easy to evaluate the harms that could result from providing consent. Today, things are far more complex, and data protection regulations have tried to compensate by improving the quality of consent. This has resulted in the transparency paradox. If we adopt consent templates, we can give users an appropriate degree of autonomy.

This article was first published in The Mint.


I ended last week’s article with a quote from Byrne Hobart that called the act of communication a controlled violation of privacy. I thought this was an unusual perspective on the nature of personal communication, but it was only on further reflection that I realized that, if it were indeed true, all social interactions must be an exercise in trying to control the inadvertent disclosure of our private thoughts and beliefs.

To be clear, this is a relatively recent problem. When all our social engagements took place solely in the physical realm, even if our interactions revealed insights into our personal traits, they were observable only by the limited few with whom we engaged. Moreover, thanks to the fallibility of human memory, any inadvertent disclosures tended to be easily forgotten. Which is why we acted with less restraint, secure in the knowledge that the threat to our privacy was minimal.

Control over Data

Now that almost all our social engagements are online, our interactions play out on a larger stage where nothing we say or upload is ever forgotten. Any hope of retaining anything close to the level of personal privacy we once enjoyed depends entirely on our ability to control what is done with our data. Which is why modern privacy law is so narrowly focused on ensuring that data subjects are able to retain control over their personal data.

In the early days of data protection, all that was needed to exercise such control was to ensure that personal data was not used without consent. In those days, there were limited uses to which data could be put, and it was easy to evaluate the possible harms that could result from providing consent. With the advent of big data, things have become much more complicated. The number of parties involved in the collection, dissemination and use of personal data has exploded, as have the present and future uses to which it can be put. This means that consent is no longer as effective a mechanism of control as it once was.

Most modern data protection regulations try to solve this problem by improving the quality of consent. They attempt to improve privacy decision-making by requiring data collectors to provide full details of the various uses to which data could be put, so that consent is obtained only after all relevant information has been considered.

Unfortunately, given the sheer scale at which data is used today, once every entity to which data could be transmitted has been listed, along with full details of the complicated multi-party data flows this involves, privacy policies become so long and complex as to be effectively unintelligible. Realizing that notice requirements were creating impossibly complex privacy documents, regulators pivoted yet again, requiring that all details required under the law (what the data could be used for and the entities with which it would be shared) be presented in simple terms that laypersons could easily understand.

Transparency Paradox

Helen Nissenbaum refers to these internally inconsistent objectives as ‘the transparency paradox’. Modern data businesses are so complex that it is simply not possible to anticipate all the consequences that could result from their data flows unless users are provided exhaustive information about the many ways in which they process data. Yet, when all that detail is transparently presented in a privacy policy, it results in documentation so long and complicated as to be beyond the capacity of even the most sophisticated among us to comprehend.

Many have argued that it is precisely because of the transparency paradox, and other similar failings of consent, that we need to come up with an alternative to our current ‘self-management’ approach to data protection. They argue for a more paternalistic approach to data protection, suggesting that since we cannot effectively make these decisions for ourselves, the regulator should intervene on our behalf, stipulating what data businesses should or should not do with our data.

In my view, this approach swings too far to the other extreme. Even if we are incapable of fully appreciating all the many ways in which data processing could affect our personal privacy, completely relinquishing all control over these decisions to a regulator denies us any choice whatsoever in the matter.

Middle Ground

As is often the case, the most effective solution would be to find a middle ground between these two extremes. People want to have some control over data decisions without having to micromanage every last aspect of them. This means that while regulation should define broad substantive rules around the collection and use of data, it should, at the same time, leave space for users to negotiate their own personal relationships with data businesses, relationships that reflect each individual's approach to privacy.

One way to achieve this would be to implement consent templates: broad, sector-specific frameworks that regulators can use to prescribe the contours of permissible data collection, processing and use. This would offer users the assurance that someone is watching out for their best interests in terms of the privacy consequences of providing consent.
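To make the idea concrete, here is a minimal sketch of what a machine-readable consent template might look like, written in Python. Everything here (the Purpose enumeration, the ConsentTemplate fields, the health-sector example) is a hypothetical illustration of the concept, not an existing standard or regulatory schema.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Purpose(Enum):
    """Hypothetical purposes a regulator might enumerate for a sector."""
    SERVICE_DELIVERY = auto()
    BILLING = auto()
    RESEARCH = auto()
    MARKETING = auto()


@dataclass(frozen=True)
class ConsentTemplate:
    """A regulator-defined, sector-specific frame for permissible use.

    The regulator fixes the outer bounds (which purposes are allowed
    at all, maximum retention, whether onward sharing is permitted);
    users then make granular choices only within those bounds.
    """
    sector: str
    permitted_purposes: frozenset
    max_retention_days: int
    onward_sharing_allowed: bool


# A hypothetical health-sector template in which marketing use is
# ruled out by the regulator before the user is ever asked.
HEALTH_TEMPLATE = ConsentTemplate(
    sector="health",
    permitted_purposes=frozenset({Purpose.SERVICE_DELIVERY,
                                  Purpose.BILLING,
                                  Purpose.RESEARCH}),
    max_retention_days=365,
    onward_sharing_allowed=False,
)
```

The design choice worth noting is that the template constrains rather than decides: it narrows the menu of options a business can present, while leaving the actual choosing to the user.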

When such templates are coupled with granular consent, implemented using technological tools that allow users to choose what can be done with their data (with full knowledge of all the trade-offs implicit in each granular decision), users will also be able to retain an appropriate level of autonomy over their data decisions.
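Granular consent would then sit on top of the template: the user toggles individual purposes, and a requested use of data is allowed only if it clears both gates, the regulator's template and the user's own choices. Again a minimal, self-contained sketch under assumed names; the purpose strings and the two-gate rule are illustrative, not drawn from any particular law or product.

```python
def use_is_permitted(purpose: str,
                     template_purposes: set,
                     user_opt_ins: set) -> bool:
    """Allow a use only if it clears both gates:

    1. the regulator's sector template permits the purpose at all, and
    2. the user has granularly opted in to that purpose.
    """
    return purpose in template_purposes and purpose in user_opt_ins


# Hypothetical example: the sector template permits research, but this
# user has opted in only to service delivery and billing, so a
# research request is refused; marketing fails at the template gate.
template_purposes = {"service_delivery", "billing", "research"}
user_opt_ins = {"service_delivery", "billing"}

assert use_is_permitted("billing", template_purposes, user_opt_ins)
assert not use_is_permitted("research", template_purposes, user_opt_ins)
assert not use_is_permitted("marketing", template_purposes, user_opt_ins)
```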