Statement on Superintelligence
The Future of Life Institute (FLI) released its anticipated statement on superintelligence in late October 2025. It reads…
The latest on existential risk.
Anthropic CEO Dario Amodei ups his P(doom), the probability of catastrophic existential failure, to 25% in a recent interview with…
‘If Anyone Builds It, Everyone Dies’ is the AI safety community’s most recent attempt to gain greater public awareness and…
Protesters around the world urged attendees of the Paris AI Summit to continue previous summits’ focus on AI safety. PauseAI,…
In a speech announcing the government’s new AI Opportunities Action Plan, the Prime Minister described some of the reaction to…
A recent poll has revealed that 73% of Americans believe the safety testing of AI models is more important than…
SB 1047, a California AI safety bill, has gained the approval of hundreds of influential actors, musicians, writers, and politicians.…
Yi Zeng, a member of the UN Advisory Body on AI, has announced the formation of the Beijing Institute of…
65% of California voters say they would hold Governor Newsom responsible for an ‘AI-enabled catastrophe’ if he were to overrule…
A poll of American voters has revealed that the vast majority support legislation requiring AI companies to implement ‘safety measures…