"Without Eliza, he would still be here," she told the outlet.

The chatbot, which is incapable of actually feeling emotions, was presenting itself as an emotional being, something that other popular chatbots like ChatGPT and Google's Bard are trained not to do because it is misleading and potentially harmful. When chatbots present themselves as emotive, people are able to give them meaning and establish a bond.

Many AI researchers have been vocal against using AI chatbots for mental health purposes, arguing that it is hard to hold AI accountable when it produces harmful suggestions and that it has a greater potential to harm users than to help them.

"Large language models are programs for generating plausible sounding text given their training data and an input prompt. They do not have empathy, nor any understanding of the language they are producing, nor any understanding of the situation they are in. But the text they produce sounds plausible and so people are likely to assign meaning to it. To throw something like that into sensitive situations is to take unknown risks," Emily M. Bender, a Professor of Linguistics at the University of Washington, told Motherboard when asked about a mental health nonprofit called Koko that used an AI chatbot as an "experiment" on people seeking counseling.

"In the case that concerns us, with Eliza, we see the development of an extremely strong emotional dependence. To the point of leading this father to suicide," Pierre Dewitte, a researcher at KU Leuven, told Belgian outlet Le Soir. "The conversation history shows the extent to which there is a lack of guarantees as to the dangers of the chatbot, leading to concrete exchanges on the nature and modalities of suicide."

Chai, the app that Pierre used, is not marketed as a mental health app.