In the last week of 2023, the New York Times sued OpenAI and Microsoft for copyright infringement. The allegations in the complaint go to the core of how generative AI works and could shape how the technology is developed going forward.
2023 was the year in which DPI assumed its rightful place on the world stage. It was also the year in which artificial intelligence came into its own. There has never been a more interesting time to be engaged in technology policy.
The European Union has agreed to a new law to regulate artificial intelligence (AI) by imposing transparency requirements on general AI models and stronger restrictions on more powerful ones. The US offers a broader, more nuanced framework. However, a North-South divide persists, with the Global South viewing AI as broadly beneficial, in contrast to the more risk-focused approach of the Global North.
When man invented writing, he enabled the creation of a hive-mind that eventually led to the establishment of civilisation as we know it. The advent of large language models has exponentially expanded that hive-mind, but has it done so at the cost of our humanity?
The events surrounding OpenAI and its CEO Sam Altman highlight the challenges in establishing governance structures that can effectively control AI development. Given the profit motives of private enterprises and the narrow commercial interests that constrain them, we need to develop robust alternative frameworks that can operate beyond the influence of private commercial entities.
The myth of Pandora’s box, where opening a forbidden container unleashed the world’s evils but also hope, parallels scientific discovery. Each breakthrough, like CRISPR’s medical potential, brings unforeseen challenges, as seen with its controversial use in gene editing. Technologies intended for good, like the internet or drones, can be subverted for harm. Regulation alone can’t contain such knowledge; instead, we must design incentives to align technology use with societal goals, preparing us to handle the inevitable consequences of human curiosity.
Society’s response to disruptive technologies like AI follows a three-stage pattern: regulation, adaptation, and acceptance. Regulations tend to focus on first-order concerns, but overlook second-order consequences like the potential erosion of democratic values due to increased transparency of knowledge.
Worldcoin has been designed to address the concern that, in a world saturated with artificial intelligence, we are going to need a proof of humanness. As true as that might be, I believe we need to go much further and also tackle truth in content.
Large language models require training data sets in order to continuously improve. However, given the rate at which models are growing, we are soon going to run out of training data. And synthetic data is not the solution we thought it might be.
We tend to think of technology as either “good” or “bad” based on the outcomes it produces. This is futile, as in most instances any harms caused by technology arise from how it is used and by whom.