
Time 100 AI list contains at least 5 people who quit OpenAI due to safety concerns

One of the five, Jan Leike, left OpenAI in May 2024 after losing confidence in the company's approach to safety

Time has released its 2024 list of the 100 Most Influential People in AI, and it features no fewer than five people who have left their roles at OpenAI over worries about the company’s ability to safely develop artificial general intelligence.

One such individual is whistleblower Daniel Kokotajlo, who famously forfeited 80% of his net worth by refusing to sign a nondisparagement agreement when he left OpenAI. Kokotajlo has since campaigned for a ‘right to warn’ for employees of AI companies, which he says are developing systems that could be “destabilizing in the short term and catastrophic in the long term.”

Last year saw the release of the first Time 100 AI list, which included veteran researchers Yoshua Bengio and Geoffrey Hinton, CEOs Demis Hassabis and Sam Altman, Max Tegmark, the author of the Pause Giant AI Experiments letter, and Eliezer Yudkowsky, a visionary in the field of AI alignment.

This year, Time’s list reflects the changes in the space over the last 12 months. Meta CEO Mark Zuckerberg joins the list following the release of Llama 3, Meta’s large language model, whose capabilities began to rival those of the latest Claude and Gemini models. The OpenAI Her voice controversy puts actress Scarlett Johansson on the list, joining podcaster Dwarkesh Patel and tech YouTuber Marques Brownlee.

AI company Anthropic was founded in 2021 by former OpenAI employees who left to build a company with a greater focus on safety. Co-founder and CEO Dario Amodei, whose p(doom) sits at 10-25%, maintains his place on Time’s list, whilst Amanda Askell, a founding member of Anthropic, is included for the first time this year.

Jan Leike, who joined Anthropic in May, and Ilya Sutskever, who recently started his own AGI company, Safe Superintelligence Inc., are also featured; both formerly worked at OpenAI. Leike led the Superalignment team at OpenAI, but quit due to his belief that the company was eschewing its safety responsibilities in favour of “shiny products.”

“Stepping away from this job has been one of the hardest things I have ever done, because we urgently need to figure out how to steer and control AI systems much smarter than us. […] Building smarter-than-human machines is an inherently dangerous endeavor. OpenAI is shouldering an enormous responsibility on behalf of all of humanity. But over the past years, safety culture and processes have taken a backseat to shiny products. We are long overdue in getting incredibly serious about the implications of AGI. We must prioritize preparing for them as best we can. Only then can we ensure AGI benefits all of humanity.”

Similar comments were made by William Saunders, another former member of OpenAI’s alignment team, who has since joined Kokotajlo in calling for protections for whistleblowers.

Read more: OpenAI is the ‘Titanic of AI’, claims former safety researcher

As a member of OpenAI’s board, Sutskever led a move to remove Sam Altman from his position as CEO in November 2023. Sutskever reportedly told employees at an emergency meeting that the board was “doing its duty, […] which is to make sure that OpenAI builds AGI that benefits all of humanity.” Altman was controversially reinstated as CEO five days later.

Another member of the board during that time was Helen Toner, who stepped down after Altman’s reinstatement. Toner is also featured on Time’s list, with her entry highlighting her work consulting with lawmakers. She has said that Altman was not ‘consistently candid’ with the board, and that he had provided inaccurate information about the company’s approach to safety.

Following the hectic events at the end of 2023, Toner argued that private companies cannot be trusted to self-regulate, and that ‘external oversight’ is needed.

The full Time 100 AI list can be found here.
