Joe Biden has withdrawn from the 2024 US Election, leaving his vice president, Kamala Harris, as the new front-runner for the Democratic nomination.
With artificial general intelligence potentially around the corner, this election may prove vital if we are to avoid the most catastrophic outcomes of AI, such as human extinction.
Whilst Harris has made it clear that she believes AI could ‘endanger the very existence of humanity’, and that such risks are ‘without question, profound, and demand global action’, she has also called for an increased focus on the current harms of AI.
“When a woman is threatened by an abusive partner with explicit deepfake photographs, is that not existential for her? When a young father is wrongfully imprisoned because of bias? Is that not existential for his family? And when people around the world cannot discern fact from fiction because of a flood of AI enabled myths and disinformation, I ask, is that not existential for democracy?”
Many have argued that we can begin to address both the current and the not-yet-realised threats of AI with the same legislation. Connor Leahy, CEO of AI safety company Conjecture, believes that the harms posed by deepfakes and the existential threat posed by AGI both stem from the same problem: a lack of liability for AI companies.
Leahy campaigns with ControlAI for the regulation of deepfakes, as he believes it would be a step towards targeting ‘the entire supply chain’ of artificial intelligence. If we want to deal with the risks posed by AI, he argues that ‘it is insufficient to just punish, say, people who use the technology to cause harm. You also have to target the people who are building this technology’.
After the unprecedented speed of advances in AI capabilities in 2023, the Biden administration signed the AI Executive Order, which requires companies training frontier models to share their safety data with the government. The administration also announced the establishment of the US AI Safety Institute, which Harris said would ‘create rigorous standards to test the safety of AI models for public use’.
The Republicans have pledged to repeal Biden’s executive order, claiming it ‘hinders AI Innovation, and imposes Radical Leftwing ideas on the development of this technology’, despite polling showing that both Democrats and Republicans supported it. The move concerned those in the AI safety community who hope to avoid AI X-risk becoming a partisan issue. It’s not clear what Donald Trump’s personal stance on AI regulation is, but he has referred to the technology as ‘maybe the most dangerous thing out there’.
Read more: Despite strong bipartisan support, Republicans pledge to repeal AI Executive Order
At the 2023 AI Safety Summit, Harris promised that the United States would continue working with governments around the world to promote ‘AI safety and equity.’ She also took issue with what she believes is a false dichotomy between safety and innovation.
“President Biden and I reject the false choice that suggests we can either protect the public or advance innovation. We can and we must do both. The actions we take today will lay the groundwork for how AI will be used in the years to come. So I will end with this – this is a moment of profound opportunity. The benefits of AI are immense. It could give us the power to fight the climate crisis, make medical and scientific breakthroughs, explore our universe, and improve everyday life for people around the world.”
In March 2024, Harris announced rules requiring federal agencies to prove their AI tools do not ‘endanger the safety and rights of the American people’ before use.