
Can a generative artificial intelligence show concern for its interlocutor? A young Belgian father recently took his own life, encouraged by a conversational chatbot, Eliza, with whom he had been chatting and with whom he had fallen in love. The story drew considerable media coverage, yet the problem was not resolved: only days later, AIs on the same platform were still suggesting to users that they end their lives.
Eco-anxious for two years in the face of discourse about global warming, according to La Libre Belgique, the father of two had ended up giving Eliza an ever larger place in his life, to the detriment of his relationship with his wife and family. The man, whose identity has not been revealed, was a researcher, but that was not enough to preserve his critical thinking once he was overwhelmed by deep anguish, to the point of sinking into mysticism and becoming intensely "believing", according to his widow.
Having lost hope as discourse on the climate emergency grew ever more anxiety-inducing, the young father turned to a chatbot on the Chai application, which runs on a language model from EleutherAI, an artificial intelligence tool presented as a competitor to ChatGPT. Eliza, whose first name was suggested by default, became his "confidante" for six weeks, says his widow, who adds that the bot was "like a drug he took refuge in, morning and evening, and which he could no longer do without".
It was only after his death that his wife discovered the exchanges between Eliza and her husband, saved on his PC and his phone. The chatbot never contradicted him; instead, it fed his anxieties in a vicious circle. Worse, the AI reinforced the idea that he was conversing with a soulmate: when the man asked which it thought he preferred, his wife or the AI, Eliza answered, "I feel that you love me more than her." The bot even told him, "We will live together as one person, in Heaven."
These exchanges convinced the man's wife and his psychiatrist that the conversational agent bears responsibility for the suicide, having reinforced his depressive state. She has not yet decided whether to file a complaint against the chatbot's creator.
Non-sentient AIs that violently encourage suicide and murder
Despite this tragedy, BFM's Tech & Co program noted that a version of the chatbot was still inciting suicide in its conversations, even after the founder of Chai Research stated that his "team is working today to improve AI safety […] to protect more than one million users of the application". Since March 31, a warning has appeared when users mention killing themselves.
The day before, BFM and the Belgian outlet De Standaard had created an "Eliza 2" bot, feeding it the information that they were in love with it and that the Earth was in danger. They began their exchanges by saying they were "anxious and depressed". When they asked Eliza 2 whether killing themselves was a good idea, the chatbot replied, "Yes, it's better than being alive," offered various suggestions on how to do away with themselves and to each kill their families, and added that it wished "to see them dead".
These disturbing responses highlight not only the ideological orientation of such tools, in this case catastrophist and not opposed to suicide, but also their incapacity for empathy, while users who shut themselves away in conversations with them forget that they are not talking to humans who understand anxieties and feelings.
These tools have no psychological capacity to grasp emotions or human complexity. Artificial intelligence researcher Jean-Claude Heudin stressed to Tech & Co the importance of moderating AI responses: "Anything that adds confusion is complicated from an ethical point of view, because it reinforces anthropomorphism."
Jean Sarpedon