Dario Amodei ups his P(doom) to 25%

Anthropic CEO Dario Amodei has raised his P(doom), the probability of a catastrophic existential failure, to 25% in a recent interview with Axios. Amodei had previously put his P(doom) in the 10-25% range. He balanced this by saying that he believes there is a 75% chance that things go “really, really well”.

Amodei holds a physics PhD from Princeton University and is a former OpenAI member, having left the company to co-found Anthropic in 2021 with his sister Daniela, with the mission to “build systems that people can rely on and generate research about the opportunities and risks of AI.” The move was a consequence of Amodei’s frustrations with OpenAI, which he felt was failing to meet its AI safety promises.

Amodei is by no means alone in his fears: several prominent AI thinkers have voiced deep concerns about the existential and societal risks of artificial intelligence. Geoffrey Hinton, the “godfather of AI,” left Google in 2023 to speak freely about his fear that advanced systems could surpass human control, placing his P(doom) in the 10-20% range by 2050, and later raising his upper bound to 50%. Eliezer Yudkowsky, founder of the Machine Intelligence Research Institute, has argued that without strict global oversight, AI could rapidly become catastrophic for humanity; representing the more doomerist school of thought among AI intellectuals, he places his P(doom) above 95%. Meanwhile, Max Tegmark of the Future of Life Institute warns that uncontrolled AI development could destabilize civilization itself, without specifying an exact range. Elon Musk has also expressed caution, endorsing a 10-20% range for his personal P(doom) estimate prior to scaling up xAI.
