Will an AI soon replace your psychiatrist?


" Hello Sir. Please sit down. So… how have you been since the last time? What if, in a few years, this innocuous phrase was no longer uttered by a flesh-and-blood psychiatrist but by an AI, an artificial intelligence? With the recent resurgence of psychiatry in public debate, especially due to the health crisis, the idea of ​​proposing mental health monitoring systems integrating AIs has resurfaced.

The idea is, let's be honest, far from new: the first trace of a chatbot (a dialogue program) dedicated to psychiatry, named ELIZA, dates back to 1966. In recent decades, advances in artificial intelligence have enabled the rise of chatbots, "robot therapists" and other voice-based health detection systems.

There are today more than twenty robot therapists validated by scientific studies in psychiatry. Several of these studies suggest that patients could develop genuine therapeutic relationships with these technologies, and even that some of them would feel more comfortable with a chatbot than with a human psychiatrist.

The ambitions are therefore high, especially since, unlike their human counterparts, these digital "professionals" promise objective, replicable and non-judgmental decisions, and availability at all times.

ELIZA's dialogue page, with an excerpt from an exchange about the interlocutor's boyfriend. ELIZA, designed in 1966 to simulate a psychotherapist, was the first dialogue software, or chatbot.

It should be noted, however, that while the name "robot therapist" evokes the image of a physical robot, most are text-based, at best animated videos. Beyond this lack of physical presence, which matters to the majority of patients, many fail to recognize all the difficulties experienced by the people they converse with. How, then, can they provide appropriate responses, such as referral to a dedicated helpline?

Diagnosis and the psychiatrist's internal model

The psychiatrist, in his interview with his patient, is able to perceive important signals betraying the existence of suicidal thoughts or domestic violence, which current chatbots can miss.

Why does the psychiatrist still surpass his electronic counterpart? When this specialist announces "You have attention deficit disorder" or "Your daughter has anorexia nervosa", the process that led him to these diagnoses depends on his "internal model": a set of mental processes, explicit or implicit, that allow him to make his diagnosis.

Just as engineering draws inspiration from nature to design high-performance systems, it may be relevant to analyze what goes on in the head of a psychiatrist (the way he builds and uses his internal model) when he makes his diagnosis, in order to better train the AI in charge of imitating him… But to what extent are a human's "internal model" and that of a program similar?

This is what we asked ourselves in our article recently published in the journal Frontiers in Psychiatry.

Man-Machine Comparison

Drawing on previous studies of diagnostic reasoning in psychiatry, we established a comparison between the internal model of the psychiatrist and that of AIs. The formulation of a diagnosis goes through three main stages:

Information gathering and organization. During the interview with a patient, the psychiatrist gathers a great deal of information (from the medical record, the patient's behavior, what is said, etc.), which he then filters according to its relevance. This information can then be associated with pre-existing profiles with similar characteristics.

AI systems do the same: based on the data with which they have been trained, they extract characteristics (features) from their exchange with the patient, which they select and organize according to their importance (feature selection). They can then group them into profiles and thus make a diagnosis.
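As a minimal sketch of this feature-selection step, the toy example below uses scikit-learn on a handful of invented, interview-derived features; the feature names, data and diagnostic labels are purely illustrative assumptions, not taken from any real system.

```python
# Illustrative sketch only: a toy feature-selection step on invented data.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif

# Hypothetical features extracted from patient interviews (rows = patients).
feature_names = ["speech_rate", "sleep_hours", "appetite_score", "mood_score"]
X = np.array([
    [110, 7.5, 3, 4],
    [ 80, 4.0, 1, 1],
    [125, 8.0, 4, 5],
    [ 75, 3.5, 1, 2],
])
y = np.array([0, 1, 0, 1])  # 0 = no diagnosis, 1 = hypothetical diagnosis

# Keep only the two features most associated with the outcome,
# the machine analogue of the clinician focusing on relevant signals.
selector = SelectKBest(score_func=f_classif, k=2).fit(X, y)
selected = [name for name, keep in zip(feature_names, selector.get_support()) if keep]
print("Selected features:", selected)
```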

The construction of the model. During their medical studies, then throughout their career (clinical practice, reading case reports, etc.), psychiatrists formulate diagnoses of which they know the outcome. This ongoing training reinforces, in their model, the associations between the decisions they make and their consequences.

Here again, AI models are trained in the same way: whether during their initial training or their subsequent learning, they constantly reinforce, in their internal model, the relationships between the descriptors extracted from their databases and the diagnostic outcome. These databases can be very large, even containing more cases than a clinician will see in an entire career.
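A minimal sketch of this idea of "reinforcing associations" is shown below: a classifier is fitted by repeated passes over a set of past cases whose outcome is known, so that the links between descriptors and diagnosis are progressively strengthened. The data, descriptors and outcome rule are invented for illustration (and scikit-learn 1.1 or later is assumed for the "log_loss" option).

```python
# Illustrative sketch only: training as repeated reinforcement of the link
# between descriptors and a known diagnostic outcome (all data invented).
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
X_cases = rng.normal(size=(200, 4))                        # 200 past cases, 4 descriptors
y_cases = (X_cases[:, 0] + X_cases[:, 2] > 0).astype(int)  # outcomes known in hindsight

model = SGDClassifier(loss="log_loss", random_state=0)
for epoch in range(5):                                      # each pass strengthens the
    model.partial_fit(X_cases, y_cases, classes=[0, 1])     # descriptor-outcome links

new_patient = rng.normal(size=(1, 4))
print("Predicted diagnosis:", model.predict(new_patient)[0])
```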

Use of the model. At the end of the two previous stages, the psychiatrist's internal model is ready to be used with new patients. Various external factors can influence how he will do so, such as his salary or his workload, which find their equivalents in the cost of equipment and the time needed to train or use an AI.

As indicated above, it is often tempting to think that the psychiatrist is influenced in his professional practice by a whole set of subjective, fluctuating and uncertain factors: the quality of his training, his emotional state, his morning coffee, etc. And that an AI, being a "machine", would be free of all these human vagaries… This is a mistake! AI, too, includes an important share of subjectivity; it is simply less immediately perceptible.


Is AI really neutral and objective?

Indeed, every AI has been designed by a human engineer. So, to compare the thought processes of the psychiatrist (and therefore the design and use of his internal model) with those of the AI, one must also consider the influence of the coder who created it. That coder has an internal model of his own, in this case one that associates not clinical data with a diagnosis, but a type of AI with a problem to be automated. And there too, many technical choices, ultimately made by humans, come into play (which learning system, which classification algorithm, etc.).

The internal model of this coder is necessarily influenced by the same factors as that of the psychiatrist: his experience, the quality of his training, his salary, the time available to write his code, his morning coffee, etc. All of these will affect the AI's design parameters and therefore, indirectly, the AI's decision-making, that is to say the diagnoses it will make.

The other subjectivity that influences the internal model of AIs is the one associated with the databases on which they are trained. These databases are themselves designed, collected and annotated by one or more other people with their own subjectivities, a subjectivity that comes into play in the choice of the types of data collected, the equipment involved, the measure chosen to annotate the database, etc.

While AIs are presented as objective, they actually reproduce the biases present in the databases on which they are trained.
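The toy sketch below illustrates this point under stated assumptions: if annotators systematically over-diagnose one group regardless of symptoms, a model trained on their labels reproduces that bias when scoring identical symptom profiles. The variables, data and group attribute are entirely invented for illustration.

```python
# Illustrative sketch only: a model reproduces a bias present in its
# training annotations (all variables and data are invented).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1000
symptom = rng.normal(size=n)                 # genuinely relevant signal
group = rng.integers(0, 2, size=n)           # clinically irrelevant attribute
# Biased annotation: annotators over-diagnose group 1 regardless of symptoms.
label = ((symptom > 0) | (group == 1)).astype(int)

model = LogisticRegression().fit(np.column_stack([symptom, group]), label)

# Same symptom level, different group: the predicted probabilities differ,
# reproducing the annotators' bias rather than correcting it.
same_symptom = np.array([[0.0, 0], [0.0, 1]])
print(model.predict_proba(same_symptom)[:, 1])
```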

Diagram synthesizing how subjective factors play into the establishment of a diagnosis: in the psychiatrist, but also in the coders, engineers, etc. Subjectivity occurs not only with the human psychiatrist, but also with therapeutic AIs, through the choices made by the engineers and coders who designed them.
Vincent Martin, Author provided

The limits of AI in psychiatry

It emerges from these comparisons that AI is not exempt from subjective factors and, for this reason in particular, is not yet ready to replace a "real" psychiatrist. The latter also has relational and empathetic qualities that allow him to adapt the use of his model to the reality he encounters, something AI is still struggling to do.

The psychiatrist is thus capable of flexibility in collecting information during the clinical interview, which gives him access to information on very different time scales: he can, for example, question the patient about a symptom that occurred weeks earlier, or adapt the exchange in real time according to the answers obtained. AIs are currently limited to a pre-established, and therefore rigid, scheme.

Another strong limitation of AIs is their lack of corporeality, a very important factor in psychiatry. Indeed, any clinical situation is based on an encounter between two people, and this encounter involves speech and non-verbal communication: gestures, the position of bodies in space, reading emotions on the face or recognizing non-explicit social signals… In other words, the physical presence of a psychiatrist constitutes an important part of the patient-caregiver relationship, which itself constitutes an important part of the care.

Any progress by AIs in this area will depend on advances in robotics, whereas the psychiatrist's internal model is already embodied.

Does this mean that we should forget the idea of a virtual shrink? The comparison between the psychiatrist's reasoning and that of AI is nevertheless interesting from the perspective of cross-pedagogy. A good understanding of the way psychiatrists reason will indeed make it possible to better take into account the factors involved in the construction and use of AIs in clinical practice. This comparison also highlights the fact that the coder brings his own share of subjectivity to AI algorithms… which are therefore not able to keep the promises made on their behalf.

It is only through this kind of analysis that a true interdisciplinary practice, capable of hybridizing AI and medicine, will be able to develop in the future, for the benefit of the greatest number.

Vincent Martin, Doctor of Computer Science, University of Bordeaux, and Christopher Gauld, child psychiatrist and sleep doctor, Paris 1 Panthéon-Sorbonne University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

