SB 1047 ‘reasonable first step’ according to Hinton, Bengio, and Russell

California’s AI safety bill has received strong support from a group of experts in the field, who describe the legislation as ‘the bare minimum for effective regulation of this technology’.

The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, better known as SB 1047, was introduced by Senator Scott Wiener earlier this year and passed the California Senate 32-1. If made law (for which it still needs to pass the State Assembly and then be signed by Governor Gavin Newsom), it would require the developers of the most powerful AI models to perform safety tests before deployment.

A letter urging lawmakers to sign the bill into law has been written by Geoffrey Hinton (who quit his job at Google to warn of the potentially catastrophic risks of frontier AI), fellow veteran researchers Yoshua Bengio and Stuart Russell, and Lawrence Lessig, founder of Creative Commons and a law professor at Harvard.

The authors say they are ‘deeply concerned’ about the severe harms the next generation of AI models could cause if proper regulation is not enacted in time. We could soon have agentic AIs that match or surpass human capabilities in a wide range of fields. The risks of such a situation are numerous: cyber attacks, the engineering of deadly pandemics, the development of dangerous weapons, and widespread misinformation. The very least legislators can do, the letter argues, is require that AI companies training the most powerful models test them before deployment.

Regulation of artificial intelligence is still in its infancy, as the authors of the letter write:

“As of now, there are fewer regulations on AI systems that could pose catastrophic risks than on sandwich shops or hairdressers.”

The legislation would also provide protections for whistleblowers such as AI safety researcher Daniel Kokotajlo, who left OpenAI after becoming dismayed by the company’s recklessness. After his departure, Kokotajlo forfeited a great deal of money in order to be free to criticise the company. In response to the letter, he too came out in support of SB 1047, not only for the whistleblower protections but also for ‘the potential for pre-catastrophe enforcement of safety best practices’.

The bill has drawn criticism from some in the tech world, who claim it could harm innovation in California. In response, Bengio and his co-authors explain that SB 1047 only applies to models that cost over $100 million to train – resources available only to the largest companies. As such, the vast majority of open-source developers and start-ups would remain unaffected by the bill.

OpenAI, Google, Amazon, Anthropic, and others have already made voluntary commitments to AI safety akin to those described in SB 1047. The bill would simply make those commitments legally binding, removing the need to rely on for-profit companies to put the public’s interests above those of their shareholders.

Polling suggests the bill is popular with voters, with 59% of Californians in favour and just 22% against.

Read more: California AI safety bill popular with voters

PauseAI, a campaign group advocating for a global pause on the development of frontier AI models (at least until they can be proven safe), focused on the letter’s use of the terms ‘first step’ and ‘bare minimum’. Whilst PauseAI say the legislation is ‘better than nothing’, they argue that humanity needs to ‘globally stop AI companies from gambling with our future’. Bengio and Russell are both sympathetic to a pause, having signed the Future of Life Institute’s open letter calling for a six-month pause on the training of models more powerful than GPT-4.
