During an interview on Fox News on April 17, Elon Musk told Tucker Carlson that Larry Page, co-founder of Google, had told him he wanted to develop an unparalleled intelligence, a kind of "digital god" — a project Musk considers dangerous for humanity.
"What happens when something much smarter than the smartest person comes up? It's very hard to predict what will happen in such a case. It's called 'the singularity', like with a black hole , because you can't know what happens next. So we should be careful about AI," Musk said.
Citing the example of the FDA, which regulates and oversees medical and food products in the United States, he added that there needs to be government oversight of companies in the AI sector.
The founder of Tesla says that we tend to regulate only after a disaster, and that protection standards would therefore come too late, because we are moving toward a world where AI could make decisions for individuals. He explains that he co-founded OpenAI as a non-profit and entirely open organization so that people would know what was going on (the name "OpenAI" refers to open source and transparency; the company became for-profit in 2019, after Musk's departure). He left OpenAI before the creation of ChatGPT, the chatbot whose "woke" biases and danger to humanity he never ceases to denounce.
This decision, he says, was made in reaction to the choices of his friend Larry Page, one of the two founders of Google, a private, for-profit company:
"The reason Open AI exists is because Larry Page and I were close friends and I was at his house in Palo Alto, and I would talk to him late into the night about [the] security issues of the 'AI […] I felt like Larry didn't take AI security seriously enough. He wanted some kind of digital super-intelligence, basically a digital god, if you will, ASAP."
At the time of this exchange, Page had already recruited "about three-quarters of all the artificial intelligence talent in the world" and was taking his digital-god project very seriously, according to Musk. The billionaire adds that when he told Page that humanity had to be taken into account, his then friend called him a "speciesist".
This term is mainly used by antispeciesists, who reject the barriers between species as a basis for determining rights and moral consideration, going so far as to equate speciesism with racism.
Religious representations of AI
Musk considers that Page sees only the huge positive potential of AI while refusing to see its potential for harm, whereas "if you have a radical new technology, you want to try to take a set of measures that maximize the likelihood of it doing good and minimize the likelihood of it doing bad things."
This is not the first time the SpaceX boss has used religious vocabulary. In October 2014, at the centennial symposium of the Department of Aeronautics and Astronautics of the Massachusetts Institute of Technology, he compared controlling AI to the attempt to master spiritual evil:
"With artificial intelligence, we summon the demon. You know all those stories where there's the guy with the pentagram and the holy water and he's like... Yeah, he sure can control the demon . It does not work."
Musk isn't alone in drawing on the religious lexicon when discussing AI development. Anthony Levandowski, co-founder of Google's self-driving car program, went the opposite way: in 2015 he created "Way of the Future", a church venerating artificial intelligence as a god.