Saying that artificial intelligence is transforming every aspect of life has become a cliché; from children to researchers, everyone knows it. What people should also start accepting is that AI is here to make our lives better, not to replace us. A great deal of paranoia surrounds AI, its integration into day-to-day activities and its possible effect on jobs. Yet one sector resists this common notion: healthcare, and psychiatry in particular. According to one report, just 4% of psychiatrists think AI will take their jobs; the rest believe that whatever intervention artificial intelligence makes will be for the good.
This week, Chelsea Chandler, Peter W. Foltz and Brita Elvevåg published a paper that aims to build trust between the psychiatric community and artificial intelligence. They do this through an AI- and machine-learning-based application that picks up signals and patterns not apparent to the human eye, in this case the psychiatrist's.
To Detect Schizophrenia
One of the major disorders in psychiatry is schizophrenia, which affects how people think, feel and perceive. The new app-based technology monitors patients' speech and tracks subtle changes in their speech patterns over time. According to one of the researchers, the changes the app reports give insight into fluctuations in a patient's speech and mental health, helping to uncover shifts in mood or thought.
In one of their studies, they compared 120 healthy participants with 105 patients with stable mental illness. Both groups were asked to retell stories one day after hearing them, in the same laboratory setting.
These retellings were put through the team's automatic speech recognition tools, which tracked speech longitudinally and extracted language-based features.
The results were compared against clinicians' evaluations, and the AI models proved as accurate as the clinicians at identifying patients showing schizophrenia symptoms and differentiating them from healthy participants.
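The idea of classifying speakers from ASR-derived language features can be sketched in miniature. The feature names (semantic coherence, type-token ratio, mean pause length) and all values below are illustrative assumptions, not the paper's actual feature set or data; the nearest-centroid classifier is likewise a stand-in for whatever models the researchers used.

```python
# Hypothetical sketch: classifying speakers from language features
# extracted by an ASR pipeline. Features, values and the classifier
# are illustrative, not taken from the paper.

def centroid(vectors):
    """Element-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def classify(sample, centroids):
    """Return the label of the nearest class centroid."""
    return min(centroids, key=lambda label: distance(sample, centroids[label]))

# Toy features: [semantic coherence, type-token ratio, mean pause length (s)]
healthy = [[0.82, 0.61, 0.4], [0.79, 0.58, 0.5], [0.85, 0.63, 0.3]]
patients = [[0.55, 0.44, 1.1], [0.60, 0.40, 0.9], [0.52, 0.47, 1.2]]

centroids = {"healthy": centroid(healthy), "patient": centroid(patients)}
print(classify([0.57, 0.45, 1.0], centroids))  # -> patient
```

A real system would extract dozens of such features from transcripts and train a proper statistical model, but the shape of the task is the same: map a speech sample to a feature vector, then to a label.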
Building Trust In the Community
In the paper, the researchers hope to establish trust between AI's capabilities and the psychiatric community. They highlight three goals for this: explainability, transparency and generalisability.
People often do not know how artificial intelligence works, which becomes a problem when clinicians consider using AI in their field. AI tools should therefore come with clear information on how they were built and on the type of data used to train them. This accounts for transparency.
For explainability, they say clinicians should be given as much information as possible about how the system arrived at its assessment; this lets the clinician understand how, and where exactly, the system should be used.
As for generalisability, the system must be trained to carry minimal bias. It should work across many mental conditions and produce sound assessments for populations beyond the one it was trained on.
Machine Learning Approach to Treating Depression
At present, clinicians have no reliable way to assess whether a patient with depression will respond to a particular antidepressant; often, an antidepressant does not produce the results it is expected to. To counter this problem, a team of researchers in 2016 published a study in which they built an algorithm to assess whether patients with depression would achieve symptomatic remission after 12 weeks on escitalopram (an antidepressant).
They used data from the Sequenced Treatment Alternatives to Relieve Depression (STAR*D) trial to identify the variables most predictive of treatment outcome, then used those variables to train an ML model to predict clinical remission. The algorithm drew on 25 predictive variables, including sociodemographic features, the number of previous major depressive episodes and depressive-severity scores.
After being trained to predict clinical remission, the model was validated on patients taking escitalopram in an independent clinical trial, the CO-MED study.
The model's accuracy, the researchers said, was significantly above chance (64.6%) and was externally validated in the CO-MED escitalopram treatment group.
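Scoring remission probability from baseline clinical variables can be sketched as a simple logistic model. The study used 25 predictors selected from STAR*D; the four variable names, their weights and the bias below are entirely made up for illustration, not the study's fitted model.

```python
import math

# Illustrative sketch only: a logistic model scoring remission probability
# from a handful of baseline variables. Variable names, weights and bias
# are hypothetical, not the coefficients from the 2016 study.
WEIGHTS = {
    "baseline_severity": -0.8,       # higher severity -> lower remission odds
    "prior_episodes": -0.3,          # more past episodes -> lower odds
    "employed": 0.5,                 # 1 if employed, else 0
    "symptom_duration_years": -0.2,  # longer duration -> lower odds
}
BIAS = 0.4

def remission_probability(patient):
    """Sigmoid of the weighted sum of the patient's baseline variables."""
    z = BIAS + sum(WEIGHTS[k] * patient[k] for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

p = remission_probability(
    {"baseline_severity": 1.2, "prior_episodes": 2,
     "employed": 1, "symptom_duration_years": 3}
)
print(round(p, 2))  # -> 0.22
```

In practice such a model is fit to trial data (here, STAR*D) and then checked on an external cohort (here, CO-MED) to see whether the learned weights generalise beyond the training population.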
Many other studies have applied a similar principle, producing ML solutions that analyse various depressive symptoms.
The healthcare community widely believes that AI will change the profession for the better. The psychiatric field in particular stands to benefit, because mental health issues affect everyone, in urban and rural areas alike, and often go unnoticed by the patient. App-based solutions like those mentioned above will make care more accessible to people of all lifestyles. Complete solutions are not yet available, but the promise that AI shows in psychiatry is real.
Sameer is an aspiring Content Writer. Occasionally writes poems, loves food and is head over heels with Basketball.