Medical chatbots – artificial intelligence systems that can communicate in speech or in writing – are growing more knowledgeable and learning at a rapid pace. To check the symptoms of a disease or ask how to take care of our health, all we need is a dedicated mobile app. The Ada app, for example, was created by doctors, scientists and engineers, and we can chat with it as smoothly as with a friend. When we feel unwell, Ada can make an initial diagnosis anywhere and at any time. For those who prefer talking to typing, a version for voice devices is also available.
AI systems can imitate human speech with such precision that it is sometimes difficult to recognise them as robots. That is why, when we begin a chat with Woebot, we are informed that it is an artificial intelligence system and not a doctor. Woebot looks after our mental health using techniques drawn from cognitive-behavioural therapy. It can amuse us, express emotions, and it has impressive knowledge. It cannot yet answer every question; however, given the current pace of technological development, in a few years it will be impossible to distinguish a chat with an artificial intelligence system from a conversation with a real-life doctor.
Health care will get a new type of assistance – a simple tool for medical consultation in every person's pocket. Instead of going to the doctor with the most trivial problems, all we have to do is launch an application. As a result, queues will shrink, and doctors will have time for patients who truly need help. Diagnoses will be made much more quickly, which, for some diseases, may be crucial for our health and even our lives. Rather than searching Google for advice, we will obtain professional help at the touch of a button. We will be able to get psychological assistance at home, and patients will not have to fear being stigmatised. Medical help will be available at any time and for everyone.
In a few years' time, we will be unable to distinguish a doctor from a robot during a conversation. AI systems will approach perfection not only in the content of their messages; they will also become more human-like in order to express the emotions that are so important in doctor-patient communication. This is where a series of ethical questions begins. Should humans be treated only by humans? Who will be held responsible if information provided by an artificial intelligence system leads a patient to make wrong decisions or to forgo treatment of a serious disease?
A doctor's appointment is a multi-layered form of communication in which making a diagnosis and prescribing medication is only part of the whole process. Equally important elements are understanding, providing psychological support, motivating patients to change their habits or to fight a disease, and talking with the family. For some people, a medical appointment has a soothing effect and is a form of therapy in its own right. The psychological aspect matters, as does the doctor's standing as an authority figure.
Chatbots cannot see people; they are unable to observe their reactions, facial expressions, general physical condition or behaviour. From these cues, doctors gather a great deal of valuable information beyond what patients report themselves. Due to malaise, stress or ambiguous symptoms, patients are not always able to describe their health objectively or explain their symptoms accurately. Chatbots operate on binary logic and may overlook such nuances. It may also turn out that these solutions are accessible only to those well-versed in new technologies. Health care risks becoming overly technologised and depersonalised. A chatbot will not hold a patient's hand or listen to their other worries. In the name of economy, it will reduce patients to data sets to be processed into a diagnosis, a mathematical formula.
For or against?
Join our discussion on Twitter.