Artificial Intelligence

Learning from Failure

We need to encourage a culture of learning from failure around AI, so that when systems fail we can understand why and share those lessons across the industry. Only when we can fail without fear will we learn what it takes to build safe AI systems.

CAS Regulations for AI

The PM-EAC suggests that AI should be regulated as a complex adaptive system. While there is much to commend in this approach, the paper's articulation of it fails to take into account many of the essential features of modern AI.

Liar's Dividend

There is widespread consternation about the impact deepfakes will have on society this year. Yet most legislative counter-measures amount to shooting the messenger. We need a different path. Thankfully, we have been here before.

NYT v. the LLMs

In the last week of 2023, the New York Times sued OpenAI and Microsoft for copyright infringement. The allegations in the complaint go to the core of how generative AI works and could shape the development of the technology going forward.

Looking Back on 2023

2023 was the year in which DPI assumed its rightful place on the world stage. It was also the year in which artificial intelligence came into its own. There has never been a more interesting time to be engaged in technology policy.