Why PauseAI protesters are demanding a treaty on AI

PauseAI protesters outside the UK Foreign Office

On a freezing cold November night in London, a dozen or so protesters stand outside the UK Foreign Office.

They’re there as part of PauseAI, a grassroots advocacy group formed in 2023 to raise public awareness of the extinction risk posed by artificial intelligence and to convince lawmakers around the world to pause the development of advanced AI models.

This week, representatives of 10 countries are meeting in San Francisco for a summit on AI risks, and PauseAI argue that the UK government should “come back with a treaty”.

We spoke with two of the organisers at the protest. Transcripts of the conversations are below (lightly edited for improved readability).

Why are you here today?

Joseph Miller (AI safety researcher): “I’m here because AI could potentially be the end of the human race, and currently nobody has a plan to fix this.

So until they do, we need to stop developing the most powerful AI models.”

William Baird (executive director of PauseAI UK): “We are here today to ask the UK government to come back with a treaty.

There is an important summit happening in San Francisco today and tomorrow. We would like to see the UK government actually bring back something tangible: an actual treaty proposal to meaningfully regulate the AI that is being developed.”

PauseAI protesters hit the pause button on AI
PauseAI want to convince lawmakers to press the pause button on frontier AI

What are the potential risks if AI goes unregulated? 

Joseph: “So, the potential risk is that we create an AI that is more capable in every way than a human. It has a mind that is more intelligent than anyone who’s ever lived, and potentially a robotic body that is more capable than any human. We should expect, in that case, that the more intelligent agent has control over the future. Just as humans can impose their will on other animals because of our greater intelligence, AI will be able to impose its will on us because of its greater intelligence.

So then the question is, what would the AI choose to do? And the answer that AI experts have come to is that we have no idea. We currently have no ability to direct with any precision what AIs target or what goals they have. Therefore, it would probably pursue something that appears to us to be a completely random objective.

This would be extremely dangerous for humans, because whatever that random objective is, it would likely be completely disconnected from anything that humans value. Humans may therefore try to prevent the AI from achieving its goals, and the AI, as an instrumental step towards achieving its aims, may kill all humans on Earth.”

William: “So, if you listen to the most skilled researchers in this field, it is very clear that this could cause extinction if we let it happen. We don’t want that to happen. I would like humanity to continue living; I would like these risks to be mitigated. Our clear ask is a halt on frontier AI development until there is proof of safety.

We do not understand how these things think; we do not understand how they make the choices they make. And this is problematic if we keep scaling to more and more intelligent levels. Humans are in control of this planet because we’re the most intelligent. If we make AI systems more intelligent than ourselves, we can no longer control or contain them. And if you want an understanding of what that looks like, look at what humans did to the Neanderthals.”

There are those who would say that while your cause is noble, a global pause on AI development isn’t realistic. They argue that if we want a positive outcome here, Western countries need to develop advanced AI first, ahead of countries like Russia and China. What’s your response to that?

Joseph: “The world has previously managed to ban dangerous technologies. CFCs were harming the atmosphere and threatened to destroy the ozone layer, and the world managed to collectively decide to stop producing them. Similarly, we have nuclear arms controls that have managed to somewhat prevent the escalation of nuclear conflicts. It’s the same with AI: this is a technology that is just so dangerous that everybody loses if it’s created.

It doesn’t matter whether the U.S. or China creates AI first, because neither of them has any ability to control the thing they will create. Therefore, it’s a complete lose-lose, and really, the only way forward is to just stop creating these things.”

This is obviously a very heavy subject that has some potentially catastrophic outcomes. On a personal level, how does that make you feel, and how do you deal with that?

William: “On a personal level? As someone who’s met the scientists, as someone who’s read the literature, I think everyone’s kind of insane. This is really clearly something that will kill us if we don’t manage it correctly.

It just seems utterly insane that everyone’s standing around doing nothing. This (the Center for AI Safety’s Statement on AI Risk) is a statement that’s been signed by thousands of experts in this field. This is something that’s been openly talked about by the European Union and by the former prime minister of this country, Rishi Sunak. It just seems so bizarre. Everyone’s just standing around doing nothing when we have a very clear threat and we aren’t responding to it.

Our inability to manage this is Britain’s biggest national security threat. It just seems utterly unpatriotic for Britain to be sitting around and doing nothing. Well, not doing nothing, but doing almost nothing.”

Joseph: “On a personal level, I do feel very concerned, very worried about how this could end up for humanity. I worry that my friends and my family will all be killed by AI and our lives will be cut short. And yeah, that would be a tragedy. And I’m very concerned that nobody is really taking this problem seriously. Very few people are actually trying to figure out how to get to a safe future.

And that’s what PauseAI is all about. It’s about finding a very realistic, implementable solution, which is not to hope for miracles, not to search for some technological solution that may be out of reach. We just need to do the simplest thing, which is to warn the world of this danger and stop the development of these AIs.”

For those reading or listening to this interview, what is your call to action? What would you like people to do, where should they go, and what should they learn more about?

Joseph: “Yeah, there’s so much you could do to help. First of all, everyone can join PauseAI. Everyone can have their voice in the conversation. We do activities like writing to representatives, protests, articles, and videos. Everybody can help spread this message, to help inform the world of this incredible danger.

And if you have special skills, if you’re a technical researcher, if you’re a policymaker, if you have skills in advertising or lobbying, then you can work directly to influence politicians or help to change the policies of the AI labs, or you can work directly on technical solutions that might help to prevent an AI catastrophe.”

William: “If you are not technically skilled or politically minded, you can attend a protest and show that you value your life, that you actually want this to be halted and properly contained.

The second thing you can do is join one of our lobby sessions, where we write emails to members of parliament. Most members of parliament in most countries are completely in the dark about this.

I have met members of parliament from multiple countries, and almost every single one of them was surprised when I showed them “this is what AI can currently do, and this is how fast it’s being developed”. So the two things I ask are: attend a protest, and come to one of the lobby email-writing sessions.”

Joseph Miller answering questions at the PauseAI protest in London

And finally, given everything you’ve said so far, what probability would you put on human extinction as a result of a misaligned artificial superintelligence?

Joseph: “My best estimate right now is that there’s about a 60% chance that humans don’t survive the next century, and that’s because of AI. In fact, more strongly, I think there’s about a 60% chance that at some point in the next decade or two, AI becomes superintelligent and kills all humans on Earth.”
