Consent to Port

The Data Empowerment and Protection Architecture framework allows data companies to collect consent for data portability just before transfer, simplifying privacy policies and focusing them on data collection and use. This unbundling of consent improves user control over personal data and enhances privacy.

This article was first published in The Mint. You can read the original at this link.


A good part of my public writing on privacy, in the pages of this column and elsewhere, has centred around my discomfort with our over-reliance on consent as the primary means to protect our personal privacy. As useful as consent might have been in the early days of data protection, I believe it no longer performs that function effectively. Instead, it is used today as a get-out-of-jail-free card by technology companies who design their privacy policies to be as wide as possible so that when you accept their terms you will have agreed to language that gets them off the hook for much of what they do with your data.

The Problem of Agency

The reason for this is the data asymmetry that is inherent in our online interactions today. Large tech companies have such disproportionate access to, and control over, our personal data that they often, I have argued, know more about the implications of our decisions than we can ever know ourselves.

We are neither technically equipped to fully understand the consequences of the consent we provide, nor can we expect to have all the information we need to make informed decisions about our privacy. In other words, any autonomy we might think we have in determining the boundaries of our personal privacy is nominal at best.

One way to get more agency over our data would be to insist that our consent is separately obtained each time our data is put to use in a new way. This would make consent more meaningful since it is provided directly in the context of the proximate use. Having said that, there are few things more irritating than having to repeatedly accept the terms of a frequently revised privacy policy. This is ostensibly why data companies consolidate the consent they need in the broadest terms so that, rather than seeking fresh consent every time they find a new purpose to which the data needs to be applied, they can accommodate these new uses within the terms of consent they previously obtained.

Data Transfers

Take transfers, for instance. Data protection laws require the collectors of data to clearly specify the individuals and entities with whom the data will be shared. However, in most instances, the number of entities with which data will be shared is so large that it is inconvenient to list them in anything but the broadest terms. As a result, the data sharing provisions in most privacy policies are worded broadly, listing the entities to which data could be transferred in categories as vague as “advertisers”, “vendors” and “researchers”. When we consent to the terms of these privacy policies, we permit our data to be shared with any entity that falls within those broad descriptions and forsake the opportunity to decide, on a more granular basis, which specific advertiser or researcher can have access to our data. Instead, we effectively delegate the responsibility of deciding this to the company that controls our data.

This is one of the primary reasons why I believe that consent as currently used is inadequate. When obtained up-front, it is, by definition, collected without reference to the specific entity or purpose for which it will be used. It is therefore unmoored from the actual transaction that it is supposed to sanction. How can consent provided in this manner be valid, given that the data principal had no idea about the actual transaction to which the consent would eventually be applied?

But as much as I might wish for greater agency in data transfers, there are practical constraints that stand in the way. To be effective, cross-platform data transfers need a common portability infrastructure that can be used by the entire ecosystem, so that every entity in the ecosystem can send and receive data. It must be designed to be fair and non-discriminatory, in a manner that does not prioritise large established companies over startups. This calls for a level of standardisation currently absent across most sectors, as well as the creation of a common technical infrastructure that will enable such sharing.

DEPA

Last week, NITI Aayog released a draft document for discussion on the Data Empowerment and Protection Architecture (DEPA), India’s technology infrastructure for secure data sharing. It described the unique technological and regulatory framework that India has created to facilitate the transfer of data between various financial institutions using digital consent.

I was fortunate enough to be present at the launch of DEPA a few years ago and wrote, at the time, of the ways in which such a framework might be deployed:

There are many use cases in which I can see the benefit of having such a data request framework. It can be used in microfinance and alternative lending to allow borrowers who might otherwise have been ineligible for a loan to present proxies for their credit-worthiness—such as their GST returns or history of mobile payments—to prove to lenders that they have the ability to service the loan. More importantly perhaps, this framework will finally allow patients to free their personal medical data from the silos in which they are currently trapped so that they can analyse their own medical history to get a better assessment of their personal health.

Three years on, it is heartening to see how much of this has come to be. DEPA is being used in multiple sectors, from lending to healthcare. Most recently, this has translated into the establishment of the Open Credit Enablement Network (OCEN), a set of digital lending APIs for borrowers and lenders. Once widely adopted, OCEN will offer unprecedented opportunities for fintech companies and traditional banks to offer products and services that would otherwise not have been possible.

But what I did not fully realise at the time was that DEPA offered another, unseen advantage. By creating a technological framework within which data principals can provide just-in-time consent for requests to port their data, DEPA gives us the tools to radically alter the very nature of consent itself. Since it offers a way in which consent can be provided for a specific purpose, proximate in time to the transfer, it allows us to give data principals an opportunity to better appreciate the implications of their consent immediately before they provide it. Consent provided in this manner is far more effective than consent provided up-front for transfers to be made in the dim and distant future.

Once it becomes possible for data companies to collect consent to port data just before its transfer, they will be able to unbundle consent into that which is required to sign up to a service and that which needs to be obtained in order to port the data to a third party. Since the latter no longer needs to be procured up-front, our privacy policies will no longer need to include details of who data might be transferred to. This will result in simpler privacy policies focused on the purpose for which data is collected and the uses to which the data fiduciary can itself put it. If there is any need to transfer the data at a later point in time, such a transfer can proceed on the basis of a consent to port sought immediately prior to the transfer.
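To make the idea of unbundled, just-in-time consent concrete, here is a minimal sketch in Python. The names and fields below (ConsentArtifact, request_port_consent and so on) are my own hypothetical illustrations, not the actual DEPA consent artefact specification: the point is simply that the consent object is created for a named recipient and a named purpose, immediately before the transfer it authorises, and lapses shortly afterwards.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical illustration of just-in-time "consent to port".
# Names and fields are illustrative only, not the DEPA specification.

@dataclass
class ConsentArtifact:
    data_principal: str      # the person whose data is being ported
    data_fiduciary: str      # the entity currently holding the data
    recipient: str           # the specific entity receiving the data
    purpose: str             # the specific use the transfer will serve
    fields: list[str]        # exactly which data items may be ported
    granted_at: datetime
    expires_at: datetime

    def is_valid(self, now: datetime) -> bool:
        return self.granted_at <= now < self.expires_at


def request_port_consent(principal: str, fiduciary: str, recipient: str,
                         purpose: str, fields: list[str]) -> ConsentArtifact | None:
    """Ask the data principal, at the moment of transfer, whether this
    specific recipient may receive these specific fields for this purpose.
    Returns a short-lived consent artifact if they agree, None otherwise."""
    answer = input(f"Allow {fiduciary} to share {fields} with {recipient} "
                   f"for '{purpose}'? [y/N] ")
    if answer.strip().lower() != "y":
        return None
    now = datetime.now(timezone.utc)
    return ConsentArtifact(principal, fiduciary, recipient, purpose, fields,
                           granted_at=now,
                           expires_at=now + timedelta(minutes=15))


def port_data(artifact: ConsentArtifact | None, payload: dict) -> None:
    """The transfer proceeds only under a currently valid consent artifact."""
    now = datetime.now(timezone.utc)
    if artifact is None or not artifact.is_valid(now):
        raise PermissionError("No valid consent to port this data")
    # ... send only artifact.fields of payload to artifact.recipient ...
    print(f"Ported {artifact.fields} to {artifact.recipient}")
```

In a flow like this, the privacy policy accepted at sign-up never needs to enumerate possible recipients; every transfer is authorised by its own narrow, short-lived grant, made with full knowledge of who will receive the data and why.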

This is a powerful solution. By uncoupling consent to port from sign-on consent we can significantly improve our effective control over our personal data. It makes the privacy compact that we enter into with those who collect our data more understandable and gives us greater, more effective agency in the decisions we make with regard to the transfer of our data. From a privacy perspective, this is the real benefit of the DEPA framework.