A recent poll has revealed that 73% of Americans believe the safety testing of AI models is more important than developing AI faster to compete with China.
The AI Policy Institute surveyed over a thousand respondents on their views about the governance of artificial intelligence, in polling published in November.
The preference for safe AI over competition with China was shared by voters across the political spectrum, with 79% of Democrats, 66% of Republicans and 74% of independents favouring safety testing.
How America approaches the development of increasingly intelligent and powerful AI over the coming years will likely play a pivotal role in mitigating (or failing to mitigate) the extinction risk posed by the technology.
Many Chinese figures in AI have shown that they understand the risks posed by AI, and are willing to cooperate to bring about a good future for all. Yi Zeng, the founder of the Beijing Institute for AI Safety and Governance and a member of the UN advisory board on AI, has made his position clear:
“On AI Safety and Governance, industry and academic leaders have to hold hands together for collaborations no matter what, both for the near term and for long-term risks. And we also need to help the governments to understand the necessities to do so.”
Read more: Beijing launches Institute of AI Safety and Governance
OpenAI’s transition from a non-profit to a for-profit company has worried some policymakers, who believe it shows that AI labs cannot be trusted to develop AI safely, and that outside regulation is needed. This topic was also put to respondents: 61% agreed that “AI labs can’t police themselves, more regulation of AI companies is needed”, whilst just 17% thought more regulation was unnecessary.
One potential way the government could intervene is through mandatory safety testing of powerful AI models, performed by the US AI Safety Institute, a measure which 74% of Americans support. Currently, no country has implemented mandatory safety testing, although Anthropic did agree to pre-deployment testing of its latest model, Claude 3.5 Sonnet, by the AI Safety Institutes of both the UK and the US.