Elon Musk and a thousand experts call for a pause in AI development to protect humanity

As the ChatGPT chatbot continues to evolve and attract attention, hundreds of experts in the field of artificial intelligence have expressed concern in an open letter, asking for a six-month pause in research on tools more powerful than GPT-4. They state that some AI systems pose a potentially serious threat to society and humanity, especially because of the unpredictability of these tools, even for their creators, and the risks they carry for workers.

In recent years, artificial intelligence has developed at a breakneck and troubling pace, coming dangerously close to the risks highlighted in works of science fiction. Elon Musk, co-founder and former board member of OpenAI, the company that later created ChatGPT, warned on December 3 of the risks posed by the prototype. In a message on Twitter, the billionaire wrote:

"ChatGPT is terrifying. We're not far from dangerously powerful AI."

ChatGPT was, however, less powerful than its successor, which appeared shortly afterwards, on March 14. On that day, OpenAI launched the latest version of the agent, GPT-4, which is more precise, according to the company, and able to interpret images. The company presented it by affirming that the agent "is more creative and collaborative than ever. It can generate, edit, and iterate with users on creative and technical writing tasks, such as songwriting, screenwriting, or learning the user's writing style."

At the World Government Summit in Dubai last February, Musk had called for "rules or regulations to control the development of AI", drawing an analogy with transport or medical safety.

In the letter of March 28, open for signature and published on the website of the Future of Life Institute, the founder of Tesla and SpaceX is one of hundreds of voices calling for a pause in AI research. The choice of venue is not insignificant: Musk is one of the advisors of the institute, which has made a profession of denouncing the existential risks that AI poses to humanity and seeks to "lead transformative technology for the benefit of life and away from extreme large-scale risks".

Although many of the letter's signatories have a conflict of interest in their criticism of ChatGPT, working in particular for companies that compete with OpenAI and Microsoft, whose lead is enormous, their criticisms are nonetheless legitimate and worrying.

The signatories say they believe "humanity can enjoy a thriving future through AI" and speak of "enjoying an AI summer" with systems designed "for the greatest benefit of all", and wish to give society "a chance to adapt". But this is possible only on the condition of acting as society has done when it "has paused other technologies with potentially catastrophic effects" on it.

Extending the metaphor, they urge a pause based on the precautionary principle:

"Let's enjoy a long AI summer and don't rush unprepared into fall."

A risk for humanity: the absence of control and reflection

This fall could be precipitated by the fact that there is currently no way to fully control AI developments. The authors of the letter state that AI systems competing with humans are likely to pose a profound threat to society and humanity. They point to extensive research "recognized by the main AI laboratories" as proof of this.

In view of the principles developed in 2017 at the Asilomar conference on artificial intelligence (coordinated by the Future of Life Institute), they say, "advanced AI could represent a profound change in the history of life on Earth, and should be planned and managed with the necessary care and resources."

Despite these principles, "this level of planning and management does not exist". They point out that in recent months, labs have locked themselves "in an uncontrolled race to develop and deploy ever more powerful digital minds that no one - not even their creators - can reliably understand, predict or control".

Such risks are raised in works of science fiction, which often have the merit of posing ethical questions, particularly regarding the relationship between technology and political power, but also by thinkers such as Jacques Ellul and Georges Bernanos. The former, a historian, sociologist and Protestant theologian, welcomed the contributions of technology, which makes it possible, for example, to travel and "receive images from all over the world":

"So you have a free universe ahead of you."

But the author of "The Technological System" and "The Technological Bluff" observed that the automobile pushed three million Parisians, for example, to go on vacation each year to the shores of the Mediterranean, like a mass moving without reflecting on its movement:

"It's very difficult in a [technical] society like ours for a man to be responsible."

Ellul cites the justifications given by the commandant of the Nazi camp of Bergen-Belsen, during the Nuremberg trials, according to which he had no time to think about the people who were dying, because he was preoccupied with the technical worries of the ovens. Ever more efficient technological tools mean that, despite their advantages, man himself becomes the instrument of what should have remained a means.

The Catholic writer Georges Bernanos gave no less of a warning in his prescient "France Against the Robots", published in 1947, declaring that the main danger lies in the absence of reflection on technology, notably in these words, which have become relatively famous:

"The danger is not in the multiplication of machines, but in the ever-increasing number of men accustomed, from childhood, to desire only what machines can give."

These alerts came well before the appearance of AI, a technology of such power that it reduces man to the background, even to the rank of the useless.

A risk for information and the labor market

Bernanos opposed "the civilization of machines [which] is that of quantity" to "that of quality".

This concern is shared by the signatories of the letter, who observe that "contemporary AI systems are now becoming human-competitive at general tasks". They pose a question that is fundamental for the future of democratic society:

"Should we let machines flood our information channels with propaganda and lies? Should we automate all jobs, including the most fulfilling ones?"

This warning signal resembles, in updated form, what Paul VI already denounced in his 1967 encyclical Populorum Progressio, recalling that technology should be at the service of man.

In point 20 of the encyclical, the pope underlined that "if the pursuit of development requires more and more technicians, it requires even more wise men of deep reflection, in search of a new humanism, which will allow modern man to find himself, embracing the higher values of love, friendship, prayer and contemplation."

This concern takes on a particular meaning in view of the difficulty designers have in fully mastering their most powerful AI tools. Technicians are numerous, but moral reflection has been too much neglected, allowing poorly framed development to continue.

Among the consequences, AI threatens man, whereas the "program made to increase production", says Paul VI in point 20 of his letter, should exist to "free man from his servitudes [in the realm of work], making him capable of being himself the agent responsible for his material well-being and for his moral progress".

The encyclical warned with these words about the technicization supposed to enrich society:

"To say: development is in fact to be concerned as much with social progress as with economic growth. It is not enough to increase the common wealth for it to be distributed equitably. It is not enough to promote technology for the earth is more humane to inhabit."

The statement is corroborated by a recent Goldman Sachs study, published on March 26, which indicates that 300 million jobs worldwide could be lost due to the use of AI, i.e. around 18% of the global workforce.

These concerns are shared by the authors of the letter, who salute OpenAI's February 24 statement that "At some point, an independent review may be required before beginning to train future systems." Noting that this declaration remains only a vague promise, they write:

"Therefore, we are asking all AI labs to immediately pause, for at least six months, the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key players. If such a pause cannot be put in place quickly, governments should step in and institute a moratorium."

Jean Sarpedon

Image credit: Shutterstock / LookerStudio
