A Look Back at 2024

My last column of the year has traditionally been a review of technology policy developments of the year gone by. Even though I had hoped to see the data protection law come into force, at the time of writing it still has not. And so, instead of privacy, the year was dominated by developments in DPI and AI.

This is a link-enhanced version of an article that first appeared in the Mint. You can read the original here.


Without a doubt, the biggest disappointment of 2024 has to be the fact that India’s data protection law is still not in force despite having managed to get through both houses of Parliament in record time in 2023. The rules that were supposed to be issued this year still haven’t seen the light of day.

In anticipation of the new law coming into effect, I discussed over the course of the year several issues that still needed to be addressed. In particular, I called for a sensible approach to age-gating—even suggesting zero-knowledge proof tokens as a way out. But none of this will make a difference until the law is brought into force. And even that is just the first step: we still need to establish a data protection board and put in place mechanisms for audits, cross-border data transfers and the like.

DPI Globalisation

On the other hand, what is most gratifying is the pace at which India’s digital public infrastructure (DPI) approach gained acceptance around the world in the span of one short year. I was worried that this might not be the case, given the many concerns that were being raised about the DPI approach. But I should not have been. Thanks to the work of many, DPI adoption scaled up around the world, as evidenced by outcomes of the DPI Global Summit, the Quad statement in Delaware and the UN’s DPI Safeguards Initiative.

On my recent visit to Brazil, I had an opportunity to witness in person the extent to which the DPI message had percolated into the global agenda on digital governance (and see for myself the DPI for climate change that Brazil had built). DPI is here to stay and 2025 will be the year in which we will see DPI projects come to fruition all over the world.

Artificial Intelligence

But, without a doubt, the topic that consumed the most column (and mind) space this year was artificial intelligence (AI). Through the year, we got to hear of so many dramatic announcements and AI-related incidents that on reflection it seems to be all I thought about. The very first article I wrote this year was an analysis of the New York Times’ copyright lawsuit against OpenAI—a litigation that at the time was thought likely to upend the way AI models are built. I argued in favour of a fair-use exemption, suggesting that it was needed if we wanted to benefit from all that AI has to offer.

In a similar vein, I argued that we needed to think differently about product liability for AI, arguing that the binary approach that has stood us in good stead so far is poorly suited to the probabilistic nature of AI. As much as I support open-source AI, I grew increasingly concerned about our reliance on it, particularly considering that attempts are being made to prevent the export of these models outside of the US. Given the many ways in which open-source AI is being practically applied in India (in education, for instance), I believe it will be a last-mile solution for DPI, reaching people and places that infrastructure alone cannot.

I remain concerned about the second-order consequences of incorporating AI into our lives. The legal industry, for example, has long been designed to train lawyers on the job, and I worry that the efficiencies we achieve through AI will come at the cost of the training that our young, upcoming lawyers need.

Looking Ahead to 2025

As we look to the year ahead of us, my ardent hope is that we will finally get to see the draft data protection rules, and that after due consultation, the law is finally brought into force. When that happens, companies big and small will have to radically reorganize their businesses to comply with the new obligations. This will likely be the most significant compliance burden imposed on them in modern history.

I expect that AI will continue to advance, even if not exactly in the way we expect it to. While we have so far focused our concerns on its first-order consequences (fake news, misinformation and the transparency of knowledge), I believe it is AI's second-order consequences (the future of work, the ways in which we learn and other issues still not evident to us) that we really ought to be worrying about. I suspect that in 2025 we will start to get a sense of what the real harms of AI actually are.

But there are other new and exciting possibilities around the corner. Late in the year, Google announced a stunning breakthrough in quantum computing, a development so transformational that, should we find practical, real-world applications for it, any predictions I make in this article will become instantly irrelevant. Once quantum computing becomes conventional, the world as we know it will change, and along with it all the governance frameworks that we currently rely on. Similar technological progress is taking place in biology, where we will, sooner than we think, start to see practical applications of computational and quantum biology that will transform health, wellness and the quality of our lives.

Humanity has never known when it has been standing on the threshold of radical transformation. And when change comes, we struggle to come to terms with it, often enacting knee-jerk regulations before we have fully understood exactly what it is that we need to safeguard against. I hope our responses to this next transformation will be measured—that we will take the time to reflect on the long-term societal benefits of these new technologies, because reactionary regulatory responses will only end up retarding the progress we stand to make.