Chatbots and Mental Health Care

When the term chatbot is mentioned, the first thing that might come to mind is virtual assistants, such as Siri and Alexa, that live on most smart devices. Both chatbots and virtual assistants use artificial intelligence to do their jobs, but there are still differences between the two.

According to a 2018 Forbes article, a chatbot is a program usually designed to perform one specific task, unlike virtual assistants, which handle multiple tasks and become familiar with our routines. Like digital assistants, chatbots can accept both text and voice commands, but text tends to be the primary interface for chatbots, since it allows users to type what they want instead of having to say it out loud.

Chatbots are most commonly used in customer service to answer simple questions. These tools enable companies to serve multiple consumers at once and provide straightforward answers without putting anyone on hold. Other uses for chatbots include ordering food, arranging travel accommodations and checking the weather forecast.
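At its simplest, a customer-service chatbot of the kind described above can be a program that matches keywords in a message against a table of canned answers. The FAQ entries and keywords below are invented for illustration, not taken from any real product:

```python
# A minimal sketch of a rule-based customer-service chatbot.
# All FAQ entries and keywords here are hypothetical examples.

FAQ = {
    "hours": "We are open Monday through Friday, 9 a.m. to 5 p.m.",
    "shipping": "Standard shipping takes 3 to 5 business days.",
    "return": "Items can be returned within 30 days of purchase.",
}

def reply(message: str) -> str:
    """Match keywords in the user's message against the FAQ table."""
    text = message.lower()
    for keyword, answer in FAQ.items():
        if keyword in text:
            return answer
    # Fall back to a human agent when no keyword matches.
    return "I'm not sure about that. Let me connect you with a human agent."

print(reply("What are your hours?"))
```

Real customer-service bots use far more sophisticated language understanding, but the basic shape, matching a question to a prepared answer and escalating to a person otherwise, is the same.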

Since chatbots can help consumers with tasks like these, can they also assist in the mental health field? Is it possible for consumers to use chatbots to improve their mental wellness?

Mental health chatbots have been around for a few years. One 2018 Healthline article reviewed four different mental health chatbots, accessible through smartphone apps or on a computer. Each chatbot is typically programmed to offer treatment in a different area of mental health, and starts with a survey of questions to tailor its advice to the individual patient. Some ask the patient simple questions each day to help them figure out how they are feeling. Some chatbots do work alongside human mental health professionals, although those programs tend to cost more.
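The flow described above, a short intake survey that tailors the advice, followed by a simple daily check-in, can be sketched in a few lines. The questions, scoring thresholds, and advice text below are all invented for illustration and are not drawn from any of the reviewed apps:

```python
# Hypothetical sketch of an intake survey plus daily check-in.
# Questions, thresholds, and advice strings are illustrative only.

INTAKE_QUESTIONS = [
    "Over the past two weeks, how often have you felt anxious? (0-3)",
    "How often have you had trouble sleeping? (0-3)",
]

ADVICE = {
    "low": "Keep up your current habits; a daily check-in can help you stay aware.",
    "high": "Consider relaxation exercises, and think about talking with a professional.",
}

def tailor_advice(answers: list) -> str:
    """Pick an advice track from the intake survey score (made-up threshold)."""
    score = sum(answers)
    return ADVICE["high"] if score >= 3 else ADVICE["low"]

def daily_checkin(mood: int) -> str:
    """Respond to the daily question: mood on a 1-5 scale."""
    if mood <= 2:
        return "Sorry to hear that. Would you like to try a short breathing exercise?"
    return "Glad to hear it! See you tomorrow."
```

The point of the sketch is the structure, not the content: an upfront survey routes the user to a track, and a recurring one-question check-in keeps the conversation going day to day.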

While chatbots offer potential advantages in accessibility and low cost, there are still concerns about the further development they need. A December 2018 Forbes article highlighted the need to ensure that chatbots are free of unconscious bias. The common perception is that chatbots are neutral and hold no bias, when in fact they are built and programmed by people who may have existing prejudices.

Another concern the Forbes article discusses is privacy and trust. One study found that because patients know they are talking to a machine, they are more likely to reveal personal information. Talking to an anonymous machine can feel liberating, since it is not an actual person, but once that information is out, the question becomes whether the data will be kept private or shared with other people or programs.

Looking at how technology can further aid future mental health treatment, the National Institute of Mental Health offers extensive research on the topic. According to its official website, technology can improve accessibility, lower costs, offer an easier introduction to treatment and create the potential for 24-hour treatment. NIMH outlines several health applications and chatbots that can offer assistance in areas of life such as life management and critical thinking. However, since these applications have not been thoroughly regulated, NIMH recommends that patients work with a real mental health care provider to determine whether a technology program can be part of a treatment.
