AI safety mandates opposed by just 7% of Americans

A poll of American voters has revealed that the vast majority support legislation requiring AI companies to implement ‘safety measures and security standards’ for their most powerful models.

Respondents also said that the US AI Safety Institute, which is currently limited to creating voluntary safety standards, should have legal authority to ensure that advanced artificial intelligence is developed responsibly.

When voters were asked about their preferred approach to AI regulation, just 7% favoured ‘no regulation’, preferring to place full responsibility for model harms on users rather than imposing any requirements on developers. In contrast, 76% of voters opted for ‘safety mandates’, defined as requirements that AI companies must meet before releasing a model to ensure that extreme risks, such as the creation of bioweapons or the launching of cyberattacks, are mitigated.

Approach           Support
No regulation      7%
Safety mandates    76%
Not sure           17%

A temporary ban on building AI systems more powerful than those already in existence was also a far more popular option than no regulation at all, with 46% of voters favouring the ban and 18% in opposition.

The poll, from the AI Policy Institute (AIPI), comes at a time when SB 1047, an AI safety bill in California, has been watered down ahead of a final vote in the State Assembly. SB 1047 has proven popular with voters, but a relentless lobbying campaign from venture capital firm Andreessen Horowitz and others has sought to block the bill.

In response, veteran AI researchers Yoshua Bengio, Geoffrey Hinton, and Stuart Russell wrote a letter in favour of the bill, debunking many of the false claims made by its opponents.

Read more: SB 1047 ‘reasonable first step’ according to Hinton, Bengio, and Russell

Respondents to AIPI’s poll were also more concerned with reducing the risk of cyberattacks and bioweapons than with dealing with bias and misinformation, with 55% opting for the former. Some have claimed that discussion of ‘hypothetical’ catastrophic risks is merely a big tech tactic to distract the public from the present harms of the technology. But Max Tegmark, president of the Future of Life Institute, makes the point that it would be a pretty ‘galaxy-brained’ move for OpenAI CEO Sam Altman to claim his technology could literally kill everyone on the planet as a ploy to avoid regulation that might harm the company’s revenue.

What’s more, if Kamala Harris were to become president, most voters would rather she prioritise ‘reducing the risks of AI accidents and misuse’ over ‘preventing AI from being in the hands of just a few companies’. Americans, it would seem, take the extreme risks of AI very seriously, and want politicians to act.
