The Future of Life Institute (FLI) released its anticipated statement on superintelligence in late October 2025. It reads: “We call for a prohibition on the development of superintelligence, not lifted before there is: 1. Broad scientific consensus that it will be done safely and controllably and 2. Strong public buy-in.”
The statement has received coverage owing to its prominent public signatories, who include notable AI experts such as Geoffrey Hinton and Yoshua Bengio as well as figures like Prince Harry, his wife, the actress Meghan Markle, and the British author Stephen Fry.
The FLI provides its signatories with context on the existential and suffering risks posed by superintelligence, ranging from human economic obsolescence and disempowerment, through losses of freedom, civil liberties, dignity, and control, to national security risks, before highlighting the worst-case scenario: unintentional human extinction.
Despite their shared goal of AI safety, the FLI's approach differs somewhat from that of the Center for AI Safety (CAIS) statement. The CAIS statement focuses on advancing research into alignment and control rather than prohibiting frontier development altogether, calling for coordinated global governance and responsible progress instead of a full moratorium and arguing that halting development outright could hinder safety research.
Another natural point of comparison is PauseAI’s statement, the most alarmist of the three. PauseAI advocates an even stronger stance than the FLI, calling for an immediate and indefinite pause on frontier AI development and stressing that existing safety measures and governance structures are vastly insufficient.
