Thu. Nov 14th, 2024

LinkedIn founder Reid Hoffman has a p(doom) of 20%

There is a 2 in 10 chance of human extinction from AI, according to LinkedIn founder Reid Hoffman

Internet entrepreneur Reid Hoffman believes there is a 2 in 10 chance that artificial intelligence will ‘eliminate humanity’.

Hoffman, who co-founded social network LinkedIn in 2002, was a founding investor of OpenAI. He has since founded Inflection AI along with Mustafa Suleyman, the CEO of Microsoft AI.

In an interview with PBS NewsHour, Hoffman was asked to estimate the likelihood of human extinction as a result of artificial intelligence. This probability, also known as p(doom), has been the subject of fierce debate amongst AI researchers. Answers vary wildly, but polling from AI Impacts found the average probability given by researchers in the field to be 14.4%.

Hoffman is slightly above that number, at 20%. The interviewer, Paul Solman, said he would still be ‘out of there’ even if there was only a 1 in 10 chance of a ticking time bomb in his room.

Eliezer Yudkowsky, who also featured in the PBS NewsHour segment on AI X-risk, believes the real figure to be over 95%. Yudkowsky founded the Machine Intelligence Research Institute to work on the AI alignment problem in 2001, and is one of the founders of the field. “Things have gone a bit worse than hoped for,” he explained, referring to the rapid improvement in AI capabilities over the last decade or so.

“The sting at the end of this is A.I. gets smarter than us, is poorly controlled, and probably humanity becomes collateral damage to its own expansion. […] It is smarter than humanity. From its perspective, it now wants to get independence of humanity. It doesn’t want to be running on computers that require electricity that humans have to generate.”

Despite Hoffman’s worrying assessment of the risks, he is still in favour of advancing AI development. Citing other existential risks such as nuclear war, asteroids, and pandemics, he believes the overall risk portfolio could be lowered with artificial general intelligence.

Given his p(doom), and the non-AI-related estimates provided by Toby Ord in The Precipice, one of the leading books on existential risk, Hoffman’s belief doesn’t hold water. Ord puts a 1 in 6 probability on human extinction over the next 100 years, with the majority of the risk coming from unaligned artificial intelligence. The p(doom) given in the book is actually lower than Hoffman’s, at 10%.

Of course, Hoffman may well disagree with the rest of Ord’s risk assessment (1 in 1,000 for nuclear war and climate change, 1 in 30 for engineered pandemics, etc.), but assuming it – and Hoffman’s p(doom) – are accurate, building artificial superintelligence in order to mitigate the remaining, roughly 1 in 15 chance of extinction from other sources (Ord’s 1 in 6 total, minus his 1 in 10 AI component) would itself impose a 2 in 10 chance of extinction.
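The arithmetic behind that comparison can be sketched as follows, using only the figures quoted above from The Precipice and Hoffman’s PBS interview; treating Ord’s AI and non-AI risks as simply additive is an assumption for illustration, not something either source states.

```python
# Back-of-the-envelope comparison of the risk figures quoted in the article.
# Assumption: Ord's non-AI risk is his total risk minus his AI component.
ord_total = 1 / 6      # Ord: total extinction risk over the next 100 years
ord_ai = 1 / 10        # Ord: risk from unaligned AI alone
hoffman_ai = 0.20      # Hoffman's p(doom) for AI

non_ai = ord_total - ord_ai  # remaining non-AI risk, roughly 1 in 15

print(f"Non-AI extinction risk (Ord): {non_ai:.1%}")       # ~6.7%
print(f"AI extinction risk (Hoffman): {hoffman_ai:.0%}")   # 20%
# Trading the non-AI risk away for Hoffman's AI risk is a net increase:
print(f"Net change in extinction risk: {hoffman_ai - non_ai:+.1%}")
```

On these numbers, even a superintelligence that eliminated every non-AI existential risk would leave total risk higher than before, which is the tension the article points out.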

Read more: Geoffrey Hinton’s p(doom) is over 50%
