Homo Privaticus

We have built our data protection laws on the edifice of consent. As a result, they rest on constructs derived from contractual frameworks. While this may have been acceptable in the early days of privacy law, the harms these laws must protect against today are perhaps more effectively addressed under tort law. We need to re-think consent.

Economists assume that individuals act rationally, always responding to incentives in their own self-interest. Many economic models further assume that individuals have complete information and can perfectly calculate costs and benefits. This stereotype is what economists refer to as Homo Economicus: the abstract individual at the heart of rational choice theory, on whom most economic models are constructed.

But, as Daniel Kahneman, Amos Tversky and Richard Thaler demonstrated, humans are not always rational in the way these economic models assume. They are driven by emotions, guided by intuition and influenced by bias. Which is why, more often than not, they do not act in real life the way economists predict in their models.

I’ve recently come to realise that privacy law has a stereotype of its own. One that, much like Homo Economicus, is fatally flawed.

Data protection laws rely on consent. Data fiduciaries may process personal data only if the data principal has permitted them to do so, and it is she who has the autonomy to decide just what can and cannot be done with her personal data. This approach is borrowed from contract law and therefore proceeds on the assumption that consent is freely given. It is here that we find privacy law’s stereotype—Homo Privaticus, if you will—the ideal data principal who is fully capable of privacy self-management.

But the reality is somewhat different. More often than not, consent is sought from us in binary terms—in the form of take-it-or-leave-it privacy notices that we have no ability to negotiate. Refusing consent is not really an option, considering that participation in modern society has come to depend on the online services our consent unlocks. And even when we do have the ability to choose what can or cannot be done with our data, we lack the information we need to make those choices in an informed manner.

No one can evaluate all the implications of providing consent in the manner sought, or the impact that doing so will eventually have on our privacy. Even if the data fiduciary has listed, in its stated privacy policy, full details of all it intends to do with our data, these details are often phrased in ways that render the permissions being sought unknowable—even to the most technically savvy among us. Matters are further complicated when personal data is transferred to other entities, as the risks we must then evaluate depend on the future actions of as-yet-unknown persons.

The decisions we make are also far from rational. We are, more often than we realise, manipulated by dark patterns—design strategies that exploit human psychology to trick us into doing things we never fully intended. These techniques purposely obfuscate the options available to us, leading us to forgo protections that could have guarded our privacy. This is what Richard Thaler calls ‘sludge’—choice architecture that discourages us from acting in our best interests (as opposed to the more beneficial nudges that his Nobel Prize-winning work is all about).

Modern data protection laws were framed using concepts borrowed from the traditional law of contract. To work, they rely on the assumption that consent is given freely and rationally, with full knowledge of all the facts necessary to make an informed choice. In practice, Homo Privaticus rarely understands the consequences of what he has agreed to or the harms that could flow from the decisions he takes. The consent he provides is not consent at all.

India’s new data protection law relies on the same consent framework. Data fiduciaries must obtain the consent of Indian data principals before processing their personal data, and must provide them with the tools they need to manage their own privacy. Which means that India, too, has proceeded on the assumption that Homo Privaticus accurately represents the country’s internet users, even though this assumption has not held true anywhere else in the world.

We need to re-imagine our approach to the protection of personal data. In 2017, I wrote a paper titled ‘Beyond Consent: A New Paradigm for Data Protection,’ in which I argued for a new approach, one that held data fiduciaries responsible for the privacy harms they caused, regardless of whether or not they had obtained the consent of the data principal for their actions.

Ignacio Cofone makes the case for a similar approach in his recent book, The Privacy Fallacy: Harm and Power in the Information Economy. He argues that instead of holding data fiduciaries liable on principles derived from contract law, we should use tort law to regulate privacy: rather than penalising them for breaches of contractual provisions, we should make them accountable for the data harms they cause.

While it might seem that India is already too committed to the consent-centric approach for this to make a difference, it will be up to the (still to be established) Data Protection Board to determine just how the principles set out in the law are to be interpreted. If we can use tort law, as Cofone suggests, to complement the statutory liability that has been set out, we may, at least in part, be able to strengthen data protection in a manner that better serves people’s privacy needs.