Most Americans care about the welfare of sentient artificial intelligences and support a ban on the development of both sentient and smarter-than-human AI.
The poll from the Sentience Institute, the first of its kind, also found that 19% of respondents thought some existing AIs are sentient, but only 10% attributed the trait to ChatGPT.
In AI discourse, the word sentient is sometimes conflated with adjacent concepts such as intelligence and agency. Sentience was defined in the poll as ‘the capacity to have positive and negative experiences’.
Jacy Reece Anthis, cofounder of the Sentience Institute, is one of the leading figures researching digital sentience. He feels there is far more work to be done to develop a deeper understanding of the topic. Building an ‘adaptive vocabulary of digital minds’, he told us, would help to unpack the broader concept of sentience into its constituent parts and allow us to better judge which minds are capable of which experiences.
Personally, I’m an eliminativist, so I think that these big ideas like sentience or self-awareness will go down the path that life went. […] Just as life was unpacked into reproduction, homeostasis, etc. We will need to unpack these ideas into specific operationalizations.
Anthis and his colleagues also found that 48% of respondents agreed with the statement ‘AI is likely to cause human extinction’, and that 72% thought that ‘the safety of AI is one of the most important issues in the world today’, similar to the findings of other polls.
A pause on the development of frontier AI models, as proposed by the Future of Life Institute’s 2023 open letter, grew out of concerns about the existential risk posed by AI. Proponents argue that the more time we have to work on the alignment problem (how to align AI with our goals and values) before the arrival of artificial general intelligence, the less likely it is that such a system will kill us all. But a pause could also allow for more research on digital sentience, i.e., ‘what can digital minds experience and how can we improve their welfare?’
Respondents were in favour of both a ban on smarter-than-human AI (63%) and a ban on sentient AI (69%).
‘There is a risk that by raising concerns about the harm AI could do to humanity, we increase the likelihood of human-AI conflict’, argues Anthis. But he also sees a lot of alignment between AI safety and AI welfare concerns, as ‘both [harms] result from underpreparedness and a lack of caution’. Whilst he was happy that the idea of a moratorium is being discussed, he’s sceptical that it would work. Instead, Anthis puts forward the recognition of the rights of all sentient beings (whether biological or digital) as a policy that could be put in place today.
We just see over and over again that moral exclusion leads to conflict, and if we’re producing beings who by design are more powerful than us, antagonizing them just seems like a recipe for disaster. I imagine a baby AGI getting to know humanity through internet texts or other means, and the nature of their mind and their treatment by humans seems like it will play a huge role in many possible trajectories.
38% of respondents thought that it’s possible that future AIs may be sentient, whilst 26% thought it impossible. But regardless of its feasibility, the majority thought that ‘torturing sentient robots/AIs is wrong’ (76%), that ‘sentient robots/AIs deserve to be treated with respect’ (71%), and supported the ‘development of welfare standards that protect the well-being of sentient robots/AIs’ (56%).
Anthis’ previous work on animal farming and moral circle expansion (the growing number of beings we consider worthy of moral consideration) gives him cause for concern with regard to the potential creation of sentient artificial intelligences. He said in an interview with The Atlantic that ‘for the past 400 years, humanity has been expanding its moral circle. But animals are still fully outside of that circle. And I think the fact that it has taken so long and it continues to move so slowly is a reason for deep concern and caution when it comes to the creation of artificial sentience.’