Advancements in machine learning (ML) and artificial intelligence (AI) have demonstrated emerging capabilities and opportunities in healthcare amid the Covid-19 pandemic. While the detection, treatment and prevention of the virus are widely discussed, a topic that is seldom addressed is its mental health ramifications.
The mental toll of being unable to lead a normal life during lockdowns has afflicted many. The added stress of ambiguous employment terms and an oncoming recession has triggered mental health issues and mood disorders. And with stringent social distancing protocols in place and existing healthcare services disrupted, individuals are seeking solutions that are available anytime, eliminating the need to book appointments in advance.
This is where the potential of AI is being tested: to create low-cost digital interventions and make behavioural health services more accessible – and even convenient – for users.
Potential Of Emotional AI
Although digital mental health is a firmly established field, with many startups offering relevant services, these AI- and data-driven efforts need to extend beyond self-care into areas such as clinical decision-making, diagnosis, and treatment.
Emotion artificial intelligence (emotion AI) enables healthcare practitioners to use data to effectively monitor their patients for critical cues hidden in their voices and facial expressions, and to fill information gaps so that appropriate treatment can be provided. It can also help them flag patients who are susceptible to suicidal thoughts.
Although still at a nascent stage, research conducted at Vanderbilt University could help doctors understand the factors that drive this kind of behaviour. The researchers created an AI model that seeks to capture suicidal phenotypes, enabling a deeper understanding of the genetic architecture of suicidal thoughts and the risk factors that contribute to them.
For instance, a virtual therapist named Ellie was launched to treat veterans experiencing post-traumatic stress disorder (PTSD). Developed as part of the University of Southern California’s SimSensei project, Ellie picks up crucial nonverbal cues that can be difficult for humans to catch. She analyses multisensory information such as facial expressions and gestures to help assess a user and thereby enable appropriate treatment.
Emotional AI need not be the mainstay of healthcare practitioners alone. With social networking prevalent among millennials and widely used to express feelings, companies like Facebook and Twitter can use it to monitor content that could signal a user’s suicidal tendencies early on and proactively report it to authorities. In fact, Facebook rolled out AI-powered software in 2017 that helps detect suicidal posts before they are reported.
While emotional AI holds a lot of promise to broaden access to mental healthcare and improve its quality, chatbots have emerged as a popular medium for delivering personalised therapy to users.
Role Of AI Chatbots In Mental Healthcare
Talking to a computer for a therapeutic experience, although not mainstream yet, is picking up quickly. With many AI-based mental health apps in the market today, interacting with a chatbot powered by emotional AI has been known to provide significant benefits to users.
Chatbot technology can serve a number of functions for users seeking relief from mental illness. For instance, while BioBase can help measure and manage stress, Wysa enables its users to track their mood and access effective mindfulness podcasts to help them stay on course.
Advances in natural language processing (NLP) have also enabled chatbots to play a crucial role in mental healthcare. One of the oldest examples dates back to the 1960s, with a program developed at the MIT AI Laboratory. Serving as one of the earliest examples in chatbot history, the Eliza program simulated conversation using pattern matching and substitution, and proved capable of engaging in health-related discourse.
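The pattern-matching-and-substitution approach can be sketched in a few lines. The rules below are hypothetical illustrations, not Eliza’s original script: each one matches a phrase in the user’s input and substitutes the captured fragment into a canned reply.

```python
import re

# Illustrative Eliza-style rules: (pattern, response template).
# These example rules are hypothetical, not from the original program.
RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Tell me more about your {0}."),
]
FALLBACK = "Please, go on."

def respond(utterance: str) -> str:
    """Match the first applicable rule and substitute the captured text."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return FALLBACK
```

Given "I feel anxious about exams", the first rule fires and the bot echoes the feeling back as a question; anything outside the rule set falls through to the generic fallback, which is exactly what gave Eliza its conversational feel.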
A popular chatbot – and a Google Play award winner of 2019 – is Woebot. Resembling an instant messaging service, it aims to replicate a conversation between a real-life therapist and a patient. It helps users reflect on their moods with step-by-step guidance using tools from Cognitive Behavioral Therapy (CBT).
Tailored to an individual’s private situation, this AI-powered chatbot has made CBT more accessible. Another chatbot, Ginger.io, uses ML to provide emotional support by offering 24/7 online CBT. What is more, the app integrates input from clinicians and therapists, making it collaborative and more reliable. And on the back of its ML capabilities, it constantly improves its offering, making it more effective in the long run.
While some of these apps are open to a wider user base, others target a niche audience – for example, solutions that help students develop better coping mechanisms during exams. With schools and universities shuttered to mitigate the spread of Covid-19, and some companies rescinding job offers to graduates, students have been put under extreme pressure.
This, coupled with the need to constantly be productive while confined at home, can have a negative impact on their health. Timely counselling enabled through AI-driven solutions which are readily available can be greatly helpful. What is more, some of these chatbots may also come with different language options.
The current digital interventions in mental health have shone a spotlight on how AI can supplement conventional means of therapeutic delivery while helping individuals gain better control over their health. Although the long-term efficacy of AI for mental health is yet to be determined, its potential to transcend geographical and financial boundaries is promising.
What is more, AI-based mental healthcare systems anchored around virtual therapists will also help reduce some of the stigma that comes with mental illnesses, and encourage more people to get help.
Nonetheless, its benefits need to be carefully balanced against its limitations.
Since these chatbots follow a predefined script, their responses are largely dependent on the depth and breadth of the data set used to train them. If left unchecked, these machine-assisted conversations could also be biased. Thus, even as emotion AI models get better at understanding complex emotions, they should ideally be used in conjunction with conventional means of care.
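The coverage limitation can be seen in a minimal sketch. The intent keywords and replies below are invented for illustration; the point is that any message outside the scripted intents falls through to a generic fallback, so the bot is only as helpful as the data behind its script.

```python
# Hypothetical scripted-chatbot sketch: replies exist only for intents
# the script covers; everything else hits a generic fallback.
INTENTS = {
    "stress": "Let's try a short breathing exercise together.",
    "sleep": "Keeping a regular bedtime can help. Want some tips?",
    "mood": "Would you like to log how you're feeling today?",
}
FALLBACK = "I'm not sure I understand. Could you rephrase that?"

def scripted_reply(message: str) -> str:
    """Return the reply for the first known intent keyword, else fall back."""
    lowered = message.lower()
    for keyword, reply in INTENTS.items():
        if keyword in lowered:
            return reply
    return FALLBACK
```

A message about stress gets a tailored reply, but a concern the script never anticipated – say, housing trouble – gets only the fallback, which is why human oversight remains essential.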