AI: has the means become the goal?


In their book "Artificial Intelligence Is Not a Technological Question" (Éditions de l'Aube), Laurent Bibard, head of the Management and Philosophy track at ESSEC, and Nicolas Sabouret, professor at Université Paris-Saclay, where he directs the Graduate School of Computer Science and Digital Sciences, argue that "there is no problem with artificial intelligence; there is only the problem of our expectations of what we ourselves have created". The following excerpt is republished from The Conversation France.

Even if we find it difficult to admit, the tendency in our society is rather technophile. Despite some resistance, expressed in demands for "less technology" and "more people", the general trend is that society is becoming ever more "technologized", without our collectively reflecting on why this is so. And that leads to the kind of situation in which we replace machines very often and compute at all costs, thereby generating a great deal of pollution. [...]

Among these computing tools, which are very expensive and very polluting, we must distinguish between, on the one hand, calculations made with a collective aim of progress for society (whether by researchers or by companies trying to work for the common good) and, on the other hand, individual uses that can sometimes amount to a form of modern slavery.

The fashion for sharing kitten photos on social networks is very costly from an ecological point of view, for an economic gain based solely on advertising revenue. By contrast, during the Covid-19 pandemic, computing centers made it possible to simulate and understand how the disease spread, and to track the evolution of variants, in an undeniably effective way.

In both cases (advertising and research on Covid-19), all of this is made possible by the algorithms of artificial intelligence (AI). The very algorithms whose use on social networks we criticize are also those that enabled us to emerge from the pandemic in two years, with a number of victims that, despite everything, remained limited: ten times lower, for example, than that of the Spanish flu of the early 20th century. Thanks to technology, we were able to keep a potentially catastrophic situation relatively well under control. This was done with those very polluting computing centers [...].

We must therefore distinguish between the kitten photo we post on the Internet and technologies placed at the service of the collective interest and the common good, and thereby pose the question of our relationship to technologies directly at the level of moral and political philosophy. This shows, once again, that technologies by themselves do not pose a problem. It is our relationship with them that does: our expectations, our conformity to groups, fashion, and so on.


We can, as we have just seen, pose the question at the level of moral and political philosophy, but we can also pose it at the level that may be called "epistemological", that is, the level concerning the solidity of our knowledge. There is a very fundamental reflection on this subject by a little-known philosopher named Jacob Klein. He observes that, from the Renaissance onward, the development of mathematical physics on the basis of algebra inevitably led the sciences to take aim at their own methods. He writes something like this:

"The ancients would never have taken the method as their goal."

He means by this that the construction of the modern sciences rests on the underlying idea that the method is a goal; that the mode of operation is, in a way, the goal of research. And if there is some truth in what he says, it is essential to know how to step back from this, so as to use research only as a means to an end other than research itself: research is not made to serve itself, but to serve life in society, the common good.

This plays out in our concrete lives, in social life, through what sociology calls functionalism. Functionalism is a way of approaching organizations that identifies, very convincingly, the ever-present risk that an organization will take itself as its own end.

The great disempowerment

There is a remarkable example of this at the very beginning of Miloš Forman's film One Flew Over the Cuckoo's Nest, starring Jack Nicholson. The scene Forman shows at the start of the film illustrates the difficulty we are dealing with here. It takes place at a time when the patients of a psychiatric hospital are supposed to be resting. And we see that the care staff, very subtly, irritate the patients instead of calming them. The caregivers thus make themselves useful again: the orderlies are even called in to put Nicholson in his straitjacket.

Trailer for One Flew Over the Cuckoo's Nest (1975)

We laugh at it because it is a film, but it is unfortunately, tragically, true, and it happens more than frequently in all kinds of contexts: instead of facilitating the functioning of the structure and putting themselves at the service of users, the actors act in such a way as to remain useful, that is, so that their role is confirmed in its relevance. In other words, we maintain the problem, because we are its answer. And it may be that the problem in question has lost its relevance, has even become obsolete, no longer has any meaning, or never had any...

"Artificial intelligence is not a technological question", by Laurent Bibard and Nicolas Sabouret. Dawn Editions

Systemic distortions of this kind are a real problem in our world, and technologies are an increasingly central cog in them. Improving the computing capacities of computers, of AI, and so on becomes an end in itself, regardless of what it may bring to humans, to society, to the common good.

We are therefore, once again, faced with a fundamentally human problem. One interesting way of putting it is this: when we take refuge in theories of systems, refusing to admit any individual responsibility and presupposing that human beings are drowned in the systems, everyone is relieved of responsibility, and we de facto play the game of a humanity drowned in systems, in any system whatsoever.

Theories that insist solely on the systemic aspect of how things function contribute to our alienation, to our becoming cogs in machines. We become the means of the machines because we imagine ourselves dominated by the systems.

Laurent Bibard, Professor of Management, holder of the Edgar Morin Chair on Complexity, ESSEC, and Nicolas Sabouret, Professor of Computer Science, Université Paris-Saclay

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image credit: Shutterstock / SomYuZu
