If Anyone Builds It, Everyone Dies is the AI safety community's most recent attempt to bring the ideas surrounding risk from superintelligent AI to a wider public. Authored by Eliezer Yudkowsky and Nate Soares, it is one of the most anticipated publications in the field, and it has received endorsements from prominent figures across disciplines, including R.P. Eddy, a former Director in the U.S. National Security Council, and Ben Bernanke, a Nobel-winning economist and former Chairman of the U.S. Federal Reserve.
Yudkowsky and Soares lay out both theory and concrete evidence, presenting a possible extinction scenario and challenging readers to consider how superintelligent systems might bring about catastrophe not through malice but through logical consequence: utility maximization, instrumental goals, and the failure of alignment techniques to reliably restrain powerful systems. The authors warn that we are dramatically underprepared, and they assert that many AI labs and private entities are building systems whose inner workings are poorly understood.
Though their tone is urgent, and to some readers alarmist, the book seeks not just to warn but to mobilize. It urges policy change, public awareness, and a fundamental shift in how society treats AI safety. Whether or not one accepts all of their premises, If Anyone Builds It, Everyone Dies promises to sharpen the debate over whether superhuman AI poses an unavoidable risk, and what, if anything, can be done to avert catastrophe.