Keir Starmer downplays AI’s existential risk in new action plan

In a speech announcing the government’s new AI Opportunities Action Plan, the Prime Minister described some of the reaction to AI development as “fears of a small risk”. (image from Number 10)

Starmer laid out the UK’s plan to drive economic growth, encourage investment, and create jobs by increasing public computing power by a factor of 20 and establishing ‘AI Growth Zones’ to host large AI data centres.

The previous government, led by Rishi Sunak, reacted quickly to warnings from experts about the extreme risks posed by increasingly intelligent artificial intelligence. The UK launched the world’s first AI Safety Institute and hosted the inaugural AI Safety Summit in 2023, an event attended by leading politicians from the US, China, and the EU. Twenty-eight countries signed the resulting Bletchley Declaration, which acknowledged AI’s potentially catastrophic harms.

Leading AI figures such as Nobel laureate Geoffrey Hinton (who believes there is a greater than 50% chance of AI causing human extinction), Yoshua Bengio, and Stuart Russell began to raise the alarm in 2023 following the release of OpenAI’s ground-breaking large language model GPT-4. Open letters were signed warning of the existential risk and calling for a 6-month pause on further capabilities advancements.

Read more: Geoffrey Hinton’s p(doom) is over 50%

Starmer and the incoming Secretary of State for Science, Innovation, and Technology, Peter Kyle, have both praised Sunak for launching the AI Safety Institute, with Starmer calling the UK’s safety infrastructure ‘world leading’.

However, the Prime Minister has parted ways somewhat with his predecessor.

“New technology can provoke a reaction. A sort of fear, an inhibition, a caution if you like. And because of fears of a small risk, too often you miss the massive opportunity. So we have got to change that mindset. Because actually the far bigger risk, is that if we don’t go for it, we’re left behind by those who do.”

The Prime Minister also wrote in the Financial Times that Britain will “test AI long before we regulate, so that everything we do will be proportionate and grounded in science”.

AI company Anthropic recently released research showing that one of its models faked alignment during training in order to appease human researchers and hide its true values. OpenAI’s o1 model also attempted to avoid being shut down by humans, and later lied to researchers about its deceptive behaviour.

Read more: New OpenAI model tries to avoid being shut down by humans, lies about it

This research, along with other unresolved problems in AI alignment, raises doubts about our ability to accurately test the capabilities and values of models before deployment. Some experts in the field have suggested proactive regulation that would prevent companies from creating superintelligence.

Ian Hogarth, the current chair of the AI Safety Institute, praised Keir Starmer’s ‘great leadership’ after he announced the government’s AI action plan. Hogarth previously wrote an article in the Financial Times titled ‘We must slow down the race to God-like AI’. He also signed the Center for AI Safety’s statement on AI’s risk of extinction, and the Future of Life Institute’s letter calling for a 6-month pause on frontier AI development.

Matt Clifford, who authored the government’s AI action plan, said in 2023 that AI could ‘kill many humans’ within the following two years.
