The week beginning October 30, 2023, was a busy one for AI policymakers: On Monday, the US released President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, and the G7 announced its agreement on Guiding Principles and a Code of Conduct on artificial intelligence. And on November 1 and 2, roughly 150 representatives from governments, industry, and academia worldwide congregated at the UK AI Safety Summit, convened by UK Prime Minister Rishi Sunak. In this blog post, we analyze the UK AI Safety Summit and its results; you can find our analysis of the G7 declaration here.
Signaling Successes: The Bletchley Declaration And AI Safety Institutes
Many have commented on the blandness of the Bletchley Declaration. Isn’t it obvious that everybody would agree that we need to take AI safety seriously and that we should do something about it? But it’s the very blandness of the Declaration that ensured that China, the US, and the EU were all among the 29 signatories. And credit where it’s due: It’s an achievement in itself that Rishi Sunak and his team managed to get all of those countries (many represented at the top level) and other participants to congregate in the same (highly symbolic) place for two days of workshops and discussions on AI safety, and to agree on anything at all.
The establishment of AI Safety Institutes by both the UK and the US (each announced at the UK AI Safety Summit) should also be regarded as a success, political jockeying notwithstanding. The forthcoming scientific assessment of existing research on the risks and capabilities of frontier AI also deserves applause, as does the Summit attendees’ support for Professor Yoshua Bengio as the leader of that research. Aside from being an award-winning AI academic, Bengio provides a link to the UN as a member of its Scientific Advisory Board. It’s worth noting, though, that not everybody signed up to everything: China was not among the countries and entities that agreed to the proposal on AI model testing.
All in all, the Summit was really about sending signals and demonstrating a willingness to cooperate. We’ll have to wait and see to what degree the good intentions expressed at the Summit translate into meaningful action.
Missed Opportunities: Too Much Speculation, Too Little Focus On Real, Present Harms
While the Bletchley Declaration and other pronouncements make all the right noises about research and collaboration, they overemphasize far-out, potentially apocalyptic scenarios and “frontier AI.” And they say nothing of substance about how to address today’s issues around AI safety.
The UK government’s continued insistence that the UK doesn’t need AI regulation drew criticism from many who feel that too little is being done to address AI-related harms that are already happening. Given the Biden Executive Order and the EU’s impending AI Act, the UK’s position on regulation also raises the specter that the UK could become a playground for companies wanting to “test” their AI models by rolling them out to a wider population without the checks that other jurisdictions might require.
Rishi Sunak’s chat with Elon Musk exacerbated the sense that leaders are neglecting the here and now. Unsurprisingly, the press jumped on Musk’s pronouncement that none of us would need jobs in the future because AI would take care of everything and we’d all have a “universal high income.” It’s much safer to focus on distant, possibly unrealistic scenarios than to discuss how to address the problems that algorithms and AI models are already causing today, such as wrongful arrests, criminal justice failings, defamation, and revenge porn. While the Rishi/Elon show had a certain entertainment value, it didn’t help us understand what to do now. If anything, it reinforced existing misconceptions about both the potential and the dangers of AI.