Artificial Intelligence


We tend to resist change. We worry about the ways in which it could alter our existing way of life and the harms that could result. But these technological changes almost always turn out to be nowhere near as frightening as they first seemed.

Learning from Failure

We need to encourage a culture of failure around AI so that when it fails we can understand why and disseminate those learnings throughout the industry. It is only when we can fail without fear that we will learn what it takes to build safe AI systems.

CAS Regulations for AI

The PM-EAC suggests that AI should be regulated as a complex adaptive system. While there is a lot to say about this approach, as articulated, the paper fails to take into account many of the essential features of modern AI.

Liar's Dividend

There is widespread consternation around the impact that deepfakes are going to have on society this year. But most legislative counter-measures are oriented towards shooting the messenger. We need a different path. Thankfully, we have been here before.

NYT v. the LLMs

In the last week of 2023, the New York Times sued OpenAI and Microsoft for copyright infringement. The allegations in the complaint go to the core of how generative AI works and could shape the development of AI going forward.