Engineering for Outcomes

Our data regulations still prescribe processes that must be followed to bring about the scarcity we have long believed will ensure data protection. Rather than prescribing processes, we should focus on engineering for the outcomes we want to achieve.
This article first appeared in the Mint.
I have long argued that modern technologies can only be effectively governed by principles-based legislation. Prescriptive rules tend to be sclerotic, calcifying faster than the technologies they seek to regulate. What we need instead are broad, durable principles that describe the outcomes we need, rather than the processes by which they are achieved. This ensures that the law's objectives remain valid even after the technology it governs has evolved in directions that no one could have anticipated.
The need for well-designed principles-based regulation is particularly acute in the era of artificial intelligence. If there is one thing that is predictable about these computational systems, it is that they will evolve in unpredictable directions. This suggests that the only way to effectively regulate them would be to define the principles according to which they must operate.
But what should those principles be?
We Regulate for Scarcity
AI systems get better the more data they are trained on: their performance scales with the volume of their training data, so the more they ingest, the more useful they become. Today's leading AI models have achieved their current levels of excellence only because they were trained on volumes of data that would have been unimaginable even a decade ago. It is fair to say that our world today is defined by data abundance, and by computational models optimised to extract insights from such abundance.
However, the data governance regulations that apply to us ignore this fundamental feature of modern AI systems. They are still defined by ideas first developed in the 1970s. Even today, data protection laws operate on the assumption that the best way to protect the privacy of individuals is to minimise the data available about them. We have accordingly designed them to maximise data scarcity, ensuring that organisations collect as little data as possible and delete it as soon as practicable after it has been used. This is why the principles of data minimisation and retention restriction remain the bedrock of privacy protection even today, nearly half a century after they were first devised.
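To see what this means in engineering terms, consider a minimal sketch of scarcity-oriented compliance. Everything below is illustrative: the field names and the 90-day retention window are hypothetical, not drawn from any statute. The point is that the rules dictate the process (collect little, delete early) and say nothing about outcomes.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical retention window; an actual statute or rulebook would set this.
RETENTION_PERIOD = timedelta(days=90)

@dataclass
class Record:
    user_id: str
    purpose: str            # the purpose for which consent was obtained
    collected_at: datetime  # timezone-aware collection timestamp
    payload: dict

def minimise(raw: dict, allowed_fields: set[str]) -> dict:
    """Data minimisation: keep only the fields needed for the stated purpose."""
    return {k: v for k, v in raw.items() if k in allowed_fields}

def purge_expired(store: list[Record], now: datetime | None = None) -> list[Record]:
    """Retention restriction: discard records once the mandated window lapses."""
    now = now or datetime.now(timezone.utc)
    return [r for r in store if now - r.collected_at < RETENTION_PERIOD]
```

Notice that nothing in this sketch measures whether anyone was actually harmed; compliance consists entirely of having followed the prescribed process.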
To be clear, even though these restrictions have been in place for so long, they have failed in their intent to stem the flow of data or effectively curb its use. They are, as a result, little more than normative fiction: aspirational values honoured more in the breach than in the observance. Organisations have learnt to navigate around them by seeking consent in terms so broad that, once it is obtained, they have the legal authority to collect vast amounts of data, retain it for extended periods, transfer it through complex supply chains and freely use it for purposes that were not even contemplated when consent was sought. As a result, social media, e-commerce and a whole host of other online companies often know more about us today than we do ourselves.
Outcomes Not Processes
When it was enacted in 2023, India's Digital Personal Data Protection Act was hailed as a principles-based law, not just because it was so much simpler than the more prescriptive drafts that preceded it, but also because it compared favourably with global data protection frameworks such as the EU's General Data Protection Regulation. It felt as if the long wait for a new law had been worth it: the delay had allowed us to learn from the mistakes that others had made and to strike a balance between over-regulation and giving our enterprises the space they need to function.
Simplification, however, is necessary but not sufficient. Moving from prescriptive rules to a principles-based frame only works if the principles you select regulate outcomes, not processes. This is not what the Indian law has done. Instead of focusing on ‘what’ regulation needs to achieve, we have chosen to specify ‘how’ compliance must be implemented.
For instance, by requiring data minimisation and retention restriction, we force organisations to engineer for data scarcity as a means of ensuring personal privacy. Quite apart from how difficult this has proven to be in practice, in a world that increasingly stands to benefit from data abundance, a data protection law designed to create conditions of optimal scarcity risks denying us the valuable and socially beneficial outcomes that AI systems can offer.
Accountability
What we should have done instead is regulate outcomes. Rather than telling data fiduciaries how to process data, we should have told them that we will hold them accountable for the harms that result from their actions. Rather than specifying the steps they need to take, we should have designed the law so that what their algorithms actually do can be assessed in real time, allowing harm, when it occurs, to be detected early enough to be mitigated.
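How such an obligation might be engineered is an open design question, but a minimal sketch helps make the idea concrete. Everything here is assumed for illustration: the harm_score function and the 0.8 threshold are placeholders for whatever harm metric a fiduciary and its regulator might agree on. The point is that the law constrains the outcome (decisions are observable, harms are detected and mitigated) rather than the process.

```python
import logging
from typing import Any, Callable

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("outcome-audit")

def with_accountability(
    decide: Callable[[dict], Any],
    harm_score: Callable[[dict, Any], float],  # hypothetical harm metric
    threshold: float = 0.8,                    # hypothetical regulatory threshold
    mitigate: Callable[[dict, Any], None] | None = None,
):
    """Wrap a decision function so every outcome is logged and scored for harm.

    The regulator specifies the outcome (harms must be detected and mitigated),
    not the process (how the model works or what data it was trained on).
    """
    def audited(inputs: dict) -> Any:
        outcome = decide(inputs)
        score = harm_score(inputs, outcome)
        log.info("decision=%r harm_score=%.2f", outcome, score)
        if score >= threshold:
            log.warning("potential harm detected; invoking mitigation")
            if mitigate is not None:
                mitigate(inputs, outcome)
        return outcome
    return audited
```

Under this framing, a fiduciary could wrap any decision-making function this way, regardless of how the underlying model works or how much data it retains; what the law audits is the stream of outcomes, not the pipeline that produced them.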
Designing our regulatory frameworks in this manner will ensure that the governance we implement is outcome-oriented and technologically agnostic, and that it remains relevant notwithstanding the unpredictable directions in which our information systems may evolve. While the principles of data minimisation and retention restriction offer the comfort of familiarity, they are hindering the development of the governance frameworks that modern AI systems require.
In the age of data abundance, we cannot use rules designed for scarcity.
