Venture capital firm Andreessen Horowitz, which manages roughly $42 billion in assets, gave politicians false information in letters sent to the US Senate, President Biden, and the UK’s House of Lords.
Andreessen Horowitz, also known as a16z, has been trying to convince regulators to take a hands-off approach to artificial intelligence. The firm argues that the risks posed by frontier AI, as described by figures such as Geoffrey Hinton and Yoshua Bengio, are unsubstantiated, and that any legislation designed to mitigate those risks would do nothing but stifle innovation. However, a16z’s campaign has moved beyond expressing a differing opinion: it has crossed into denying basic facts about the technology.
The letter to Biden, which was also signed by prominent AI risk sceptic Yann LeCun, made the following claim:
“Although advocates for AI safety guidelines often allude to the “black box” nature of AI models, where the logic behind their conclusions is not transparent, recent advancements in the AI sector have resolved this issue, thereby ensuring the integrity of open-source code models.”
This is a blatant falsehood. The same paragraph was included in a letter to the UK House of Lords, and a slightly edited version was in a written statement sent to the US Senate.
Neel Nanda, head of mechanistic interpretability at Google DeepMind, said ‘this is massively against the scientific consensus’.
AI researcher Joseph Miller told us that the authors of the letter either ‘do not know what they are talking about or are being deliberately misleading in order to further their own interests’.
Miller, who specialises in mechanistic interpretability at FAR AI, explained that whilst researchers have made some limited progress in understanding the inner workings of neural networks, the issue is very far from being resolved.
“Our current level of understanding is similar to neuroscientists’ understanding of the brain. We can sometimes say that a certain area of the neural network is related to a certain function, but we certainly do not have the tools yet to read an AI (or a human’s mind). There is years or decades more work to do before we can look inside an AI and say that it is not going to do anything dangerous.”
Large language models such as GPT-4 are not designed the way traditional software is. Instead, they are trained. The training process consists of feeding the model a huge amount of text and slowly tweaking its parameters until it can competently predict the next word of any given piece of text. At no point in the training process, says Miller, do humans need to understand the function performed by each parameter. The end result is a powerful model whose inner workings remain mostly a mystery. Because of this lack of understanding, the model may harbour unknown dangerous capabilities, making it impossible to guarantee its safety.
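To make the distinction concrete, here is a deliberately toy sketch of next-word prediction. It is a count-based bigram model, not the gradient-based training used for models like GPT-4, but it illustrates the key point: the ‘parameters’ (here, co-occurrence counts) emerge automatically from the data, and no human ever assigns them a meaning.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Build a toy next-word model by counting which word follows which.
    No human decides what any individual count 'means' -- the values
    simply fall out of the training text, loosely analogous to how an
    LLM's parameters emerge from training (illustrative only)."""
    counts = defaultdict(Counter)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(model, word):
    """Return the word most often seen after `word` in training, or None."""
    if word not in model:
        return None
    return model[word].most_common(1)[0][0]

# Hypothetical miniature corpus for demonstration.
corpus = "the cat sat on the mat the cat ran"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # -> cat ("cat" follows "the" most often)
```

In a real LLM the counts are replaced by billions of learned numerical weights adjusted by gradient descent, which is precisely why inspecting them tells researchers so little, and why the ‘black box’ problem the letter dismisses remains unresolved.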
The letter to Biden was in response to his signing of the AI Executive Order, which seeks to place some minimal safety requirements on the most powerful AI models. The executive order has been shown to be popular with voters (79% of Democrats and 64% of Republicans support it), but the authors of the letter think it’s too restrictive.
Read more: Despite strong bipartisan support, Republicans pledge to repeal AI Executive Order
Martin Casado, a general partner at a16z, is alleged to have made a similar claim about the black box issue at a Senate Insight Forum, though he says he cannot recall doing so. ‘If I did say that,’ Casado replied, ‘I misspoke. I don’t believe that.’
That denial is hard to square with the letter he and his colleagues sent to President Biden, which Casado still showcases in a pinned tweet at the top of his Twitter profile.
Another signatory of the letter, John Carmack, claimed he ‘didn’t get to proofread the statement’. He signed a letter sent to the President of the United States, a letter attempting to influence regulation of what could be the most transformative technology ever devised by mankind, and he didn’t even read it. But don’t worry, Carmack said he ‘doesn’t really care’ about the black box problem.
a16z, which has also taken the minority position in opposing SB-1047 (a Californian AI safety bill supported by 59% of voters), has explained its lobbying intentions: ‘If a candidate supports an optimistic technology-enabled future, we are for them. If they want to choke off important technologies, we are against them.’