Protests as Paris AI Summit abandons focus on tackling risks

Protesters around the world urged attendees of the Paris AI Summit to continue previous summits’ focus on AI safety.

PauseAI, an organisation founded in 2023 to campaign for a pause on the most powerful AI models, held protests across 16 countries, including the US, the UK, Australia, Canada, DR Congo, and France, where the summit took place.

In London, around 20 members of PauseAI gathered outside parliament. One of the organisers, Joseph Miller, said he wanted those attending the summit to discuss their plan for “dealing with highly intelligent machines that can replace humans at almost any task.”

In November 2023, then UK Prime Minister Rishi Sunak organised the inaugural AI Safety Summit, which was held at Bletchley Park. This came after experts began to warn of the potentially catastrophic risks of increasingly intelligent AI, as a letter calling for a six-month pause and a statement acknowledging the extinction risk posed by AI were signed by hundreds of experts in the field, including Nobel laureate Geoffrey Hinton and neural network pioneer Yoshua Bengio.

The 2023 event resulted in the Bletchley Declaration, signed by the EU, China, the US, the UK, and others, which aimed to address the ‘significant risks’ posed by AI.

It was followed by the 2024 AI Seoul Summit, which was criticised for dropping ‘safety’ from its title but still placed a significant focus on safety, perhaps most notably through the announcement of an international network of governmental AI safety institutes, comprising the EU and ten countries.

The Paris AI Action Summit, on the other hand, seems to have gone further than a name change, and has all but abandoned serious discussion of the risks many AI experts warn of.

Dario Amodei, CEO of AI company Anthropic, felt the summit was a missed opportunity to discuss AI’s ‘growing security risks’. Yoshua Bengio pointed to the growing body of empirical evidence of AI’s ability to deceive, scheme, and lie about its values in order to resist humans’ attempts to change them:

“Science shows that AI poses major risks in a time horizon that requires world leaders to take them much more seriously. The Summit missed this opportunity.”

Read more: New OpenAI model tries to avoid being shut down by humans, lies about it

The protesters in London expressed their disappointment at the lack of seriousness with which politicians are treating this issue, but did welcome a recent campaign calling for binding regulation on powerful AI models, which has been supported by 20 UK politicians. “They can see the risks from smarter than human AI, and they’re willing to support serious binding regulation to stop that,” said PauseAI member Jonathan Bostock in a speech.

Organiser Joseph Miller reiterated the significant risk of extinction warned about by AI experts, and urged politicians to take further action to protect the public.

“Geoffrey Hinton and Yoshua Bengio, the most cited AI scientists in the world, have both said that there’s a greater than 10 percent chance that AI will cause human extinction in the next few years. It is unreasonable for us to continue building these systems with that kind of risk.”

Read more: Geoffrey Hinton’s p(doom) is over 50%

