Abstract
The use of Artificial Intelligence (AI), including chatbots, is common, and prevalence is expected to continue to rise. This paper delves into the creation and deployment of a chatbot named Tessa. Tessa was intended to aid users' self-assessment of symptoms indicative of eating disorders and guide them towards relevant support services. The chatbot was designed to help ease strain on overburdened healthcare staff and offer support for individuals who may face significant delays in accessing an in-person medical consultation. Unfortunately, despite a promising start, a recent incident with Tessa demonstrated how chatbots can go wrong. This paper analyses the incident from technical, psychological, and legal viewpoints, with a specific focus on key considerations around responsibility and safeguarding of chatbots within the health domain and the AI Act. This paper contributes to the ongoing discourse on the implications of AI-driven healthcare interventions, fostering a critical dialogue for future developments in this evolving landscape. We support the idea of regular assessments of AI interventions, improved regulation, and more stringent consideration of ethical and safeguarding issues.
Original language | English |
---|---|
Pages (from-to) | 67-75 |
Number of pages | 9 |
Journal | Jusletter IT |
Publication status | Published - 15 Feb 2024 |
Keywords
- AI
- chatbots
- eating disorders
- responsibility
- ethics
- risk assessment