Is there intelligence in artificial intelligence?

Almost 10 years ago, in 2012, the scientific world marveled at the feats of deep learning. Three years later, this technique allowed the AlphaGo program to defeat the champions of Go. And some got scared. Elon Musk, Stephen Hawking and Bill Gates worried about an imminent end of humanity, supplanted by artificial intelligences beyond all control.

Wasn't that a bit of a stretch? This is precisely what the AI itself thinks. In an article it wrote in 2020 for The Guardian, GPT-3, a gigantic neural network with 175 billion parameters, explains:

“I'm here to convince you not to worry. Artificial intelligence is not going to destroy humans. Believe me.”

At the same time, we know that the power of machines keeps increasing. Training a network like GPT-3 was literally unthinkable just five years ago. It is impossible to know what its successors will be capable of in five, ten or twenty years. If today's neural networks can replace dermatologists, why wouldn't they end up replacing us all?

Let's turn the question around.

Are there human mental skills that remain strictly beyond the reach of artificial intelligence?

We immediately think of skills involving our “intuition” or “creativity”. No luck: AI claims to challenge us in these areas as well. As proof, artworks created by programs have sold for considerable sums, some fetching almost half a million dollars. On the music side, everyone will of course have their own opinion, but we can already recognize acceptable bluegrass or near-Rachmaninoff in the imitations produced by the MuseNet program, created, like GPT-3, by OpenAI.






Will we soon have to submit with resignation to the inevitable supremacy of artificial intelligence? Before calling for revolt, let's try to see what we are dealing with. Artificial intelligence relies on several techniques, but its recent success is due to just one: neural networks, especially those of deep learning. Yet a neural network is nothing more than a machine for making associations. The deep network that made headlines in 2012 associated images (a horse, a boat, mushrooms) with the corresponding words. Hardly enough to cry genius.

Except that this association mechanism has the somewhat miraculous property of being “continuous”. Present a horse the network has never seen and it recognizes it as a horse. Add noise to the image and it is not bothered. Why? Because the continuity of the process guarantees that if the input to the network changes a little, its output will also change little. If you force the still-hesitant network to commit to its best answer, that answer probably won't change: a horse remains a horse, even if it differs from the learned examples, even if the image is noisy.
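To make this continuity concrete, here is a minimal sketch in Python. Everything in it is invented for illustration (the weights, the features, the class list): it is not a real trained network, only a toy linear associator showing that a small perturbation of the input barely moves the class scores, so the best answer stays the same.

import numpy as np

# Toy "associator": a linear map from invented image features to class scores.
# All numbers are made up; the point is only the continuity of the mapping.
rng = np.random.default_rng(0)
classes = ["horse", "boat", "mushroom"]

W = rng.normal(size=(3, 8))                  # hypothetical learned weights
x_horse = rng.normal(size=8) + 2.0 * W[0]    # an input deliberately close to "horse"

def predict(x):
    scores = W @ x                           # association: input features -> class scores
    return classes[int(np.argmax(scores))], scores

label_clean, s_clean = predict(x_horse)
label_noisy, s_noisy = predict(x_horse + 0.05 * rng.normal(size=8))  # small noise added

print(label_clean, label_noisy)                    # expected: the same label twice
print(float(np.abs(s_clean - s_noisy).max()))      # the scores moved only slightly

A linear map is continuous by construction; deep networks compose such maps with continuous non-linearities, which is what keeps nearby inputs mapped to nearby outputs.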

Making associations is not enough

Good, but why call such associative behavior “intelligent”? The answer seems obvious: it can diagnose melanoma, grant bank loans, keep a vehicle on the road, detect pathologies in physiological signals, and so on. Thanks to their power of association, these networks acquire forms of expertise that take humans years of study. And when one of these skills, for example writing a newspaper article, seems to resist for a while, it suffices to feed the machine even more examples, as was done with GPT-3, for it to start producing convincing results.

Is that really what it means to be intelligent? No. At best, this type of performance represents only a small facet of intelligence. What neural networks do resembles rote learning. Not quite, of course, since these networks use continuity to fill in the gaps between the examples they have been shown. Let's call it learning almost-by-heart. Human experts, whether doctors, pilots or Go players, often do nothing else when they decide reflexively, drawing on the large stock of examples absorbed during their training. But humans have many other powers.

Learning to calculate or to reason over time

A neural network cannot learn to calculate. Associating operations such as 32 + 73 with their results has its limits. Networks can only reproduce the strategy of the dunce who tries to guess the answer and sometimes happens to get it right. Is calculating too hard? Then consider an elementary IQ test: continue the sequence 1223334444. Association by continuity is of no help in seeing that the structure, the digit n repeated n times, continues with five 5s. Still too hard? Association programs cannot even figure out that an animal that died on Tuesday is not alive on Wednesday. Why? What are they missing?
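The rule itself is trivial to state once the structure has been seen, as in this short Python sketch (the function name is ours, purely for illustration); the point of the argument is that a pure associator has no means of discovering that structure from the digits alone.

def digit_run_sequence(up_to):
    # The digit n, repeated n times: 1, 22, 333, 4444, ...
    return "".join(str(n) * n for n in range(1, up_to + 1))

print(digit_run_sequence(4))   # '1223334444', the given prefix
print(digit_run_sequence(5))   # '122333444455555', the continuation with five 5s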

Modeling in cognitive science has revealed the existence of several mechanisms, other than association by continuity, which are all components of human intelligence. Because their expertise is entirely precomputed, neural networks cannot reason over time: they cannot decide that a dead animal stays dead, or grasp the meaning of the sentence “he is still not dead” and the oddity of this other sentence: “he is not always dead”. Merely predigesting large amounts of data does not allow them to spot new structures that are obvious to us, such as the groups of identical digits in the sequence 1223334444. Their almost-by-heart strategy is also blind to novel anomalies.

The detection of anomalies is an interesting case, because it is often how we gauge the intelligence of others. A neural network will not “see” that a nose is missing from a face. By continuity, it will keep recognizing the person, or perhaps confuse them with someone else. But it has no way of realizing that the absence of a nose in the middle of a face is an anomaly.
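Here is a deliberately caricatural sketch of that blindness, with invented names and feature vectors (nothing here is a real face model): a nearest-neighbour "recognizer" that simply associates a face with the closest known person. Erasing the "nose" feature moves the vector only a little, so the same person is returned, and nothing in the mechanism can flag the missing nose as an anomaly.

import numpy as np

# Invented gallery of known faces, reduced to four made-up features
# (say: eyes, nose, mouth, chin).
people = {
    "Alice": np.array([0.9, 0.8, 0.7, 0.6]),
    "Bob":   np.array([0.2, 0.3, 0.9, 0.1]),
}

def recognize(face):
    # Pure association by proximity: return whoever is closest, no questions asked.
    return min(people, key=lambda name: np.linalg.norm(people[name] - face))

alice_without_nose = people["Alice"].copy()
alice_without_nose[1] = 0.0            # erase the "nose" feature

print(recognize(alice_without_nose))   # still 'Alice': recognized, never flagged as odd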

There are many other cognitive mechanisms that remain inaccessible to neural networks. Automating them is the subject of ongoing research. It involves operations performed at processing time, whereas neural networks merely apply associations learned in advance.






With a decade of hindsight on deep learning, the informed public is beginning to see neural networks much more as “super-automatisms” and much less as intelligences. For example, the press recently highlighted the astonishing performance of the DALL-E program, which produces creative images from a verbal description (for example, the images DALL-E imagines from the phrase “avocado-shaped armchair”, shown on the OpenAI website). We are now hearing judgments that are much more measured than the alarmist reactions that followed the release of AlphaGo: “It's quite amazing, but we must not forget that this is an artificial neural network trained to accomplish a task; there is no creativity or any form of intelligence.” (Fabienne Chauvière, France Inter, January 31, 2021)

No form of intelligence? Let's not be too harsh, but let's remain clear-eyed about the enormous gap between neural networks and what a true artificial intelligence would be.


Jean-Louis Dessalles wrote “Very Artificial Intelligences”, published by Odile Jacob (2019).

Jean-Louis Dessalles, Lecturer, Institut Mines-Télécom (IMT)

This article is republished from The Conversation under a Creative Commons license. Read the original article.
