Artificial Intelligence

Learning from Failure

We need to encourage a culture of failure around AI so that when it fails we can understand why and disseminate those lessons throughout the industry. It is only when we can fail without fear that we will learn what it takes to build safe AI systems.

CAS Regulations for AI

The PM-EAC suggests that AI should be regulated as a complex adaptive system. While there is much to be said for this approach, the paper's articulation of it fails to take into account many of the essential features of modern AI.

Liar's Dividend

There is widespread consternation about the impact that deepfakes are going to have on society this year. But most legislative counter-measures are oriented towards shooting the messenger. We need a different path. Thankfully, we have been here before.

NYT v. the LLMs

In the last week of 2023, the New York Times sued OpenAI and Microsoft for copyright infringement. The allegations in the complaint go to the core of how generative AI works and could shape the way the technology develops going forward.

Looking Back on 2023

2023 was the year in which DPI assumed its rightful place on the world stage. It was also the year in which artificial intelligence came into its own. There has never been a more interesting time to be engaged in technology policy.

AI for the Global South

The European Union has agreed to a new law to regulate artificial intelligence (AI), imposing transparency requirements on general AI models and stronger restrictions on more powerful ones. The US offers a broader, more nuanced framework. However, there exists a North-South divide, with the Global South viewing AI as beneficial, in contrast to the more risk-focused approach of the Global North.

Human Writing

When man invented writing, he enabled the creation of a hive-mind that eventually led to the establishment of civilisation as we know it. The advent of large language models has exponentially expanded that hive-mind, but has it done so at the cost of our humanity?

Governing the Governors

The events surrounding OpenAI and its CEO Sam Altman highlight the challenges of establishing effective governance structures that can appropriately control AI development. Given the profit motivations of private enterprise and the narrow commercial interests that constrain it, we need to develop alternative, robust frameworks that can operate beyond the influence of private commercial entities.

Pandora's Box

The myth of Pandora’s box, where opening a forbidden container unleashed the world’s evils but also hope, parallels scientific discovery. Each breakthrough, like CRISPR’s medical potential, brings unforeseen challenges, as seen with its controversial use in gene editing. Technologies intended for good, like the internet or drones, can be subverted for harm. Regulation alone can’t contain such knowledge; instead, we must design incentives to align technology use with societal goals, preparing us to handle the inevitable consequences of human curiosity.

Managing AI Disruption

Society’s response to disruptive technologies like AI follows a three-stage pattern: regulation, adaptation, and acceptance. Regulations tend to focus on first-order concerns but overlook second-order consequences, such as the potential erosion of democratic values due to the increased transparency of knowledge.