AI Implementation in Mental Healthcare
Welcome to Health Connect, the podcast for health professionals where we will share the latest news and information on science and technology in the medical field. In this episode, we give you an overview of the increasing use of artificial intelligence in mental health care.
How effective is artificial intelligence at detecting psychiatric disorders? Are these advances ready for the clinical setting? And what ethical considerations should we weigh before implementing them in our daily practice?
According to 2017 figures from the World Health Organization, almost half of the world's population lives in countries with fewer than one psychiatrist per 100,000 inhabitants.1
This shortage of mental health specialists is one of the main barriers to achieving optimal detection, prevention, treatment, and follow-up of mental disorders. Public stigma surrounding psychiatric illness, concerns about cost, lack of awareness, and difficulty recognizing symptoms also contribute to these barriers.1
So, could the implementation of artificial intelligence in mental health care be a possible solution to some of these problems?1
The diagnosis of mental disorders is known to be complicated by the heterogeneity of clinical manifestations, symptomatology, and fluctuations in disease course, as well as by remaining gaps in our understanding of some etiological mechanisms.2
Digital health tools and technologies offer great opportunities to support and enhance the diagnostic and interventional capabilities in mental health care. Artificial intelligence, or AI, has great potential to transform our understanding of mental disorders and how we diagnose them; this could help make sense of the complex patterns and interactions between genes, brain, behavior, and experience.2
In light of current technological developments, how effective is the performance of AI models in the diagnosis of mental disorders?2
To answer this question, investigators in a study published in npj Digital Medicine in 2022 conducted an umbrella review of 15 previous systematic reviews providing evidence on the performance of AI models in diagnosing eight mental and developmental disorders. These included:2
● Seven studies in Alzheimer's disease.
● Six in mild cognitive impairment.
● Three in schizophrenia.
● Two in bipolar disorder.
● Plus, individual studies in autism spectrum disorder, obsessive-compulsive disorder, post-traumatic stress disorder, and psychotic disorders.
In this review, the performance of the AI models in diagnosing these mental disorders ranged from 21 percent to 100 percent.2
A large proportion of the studies focused on distinguishing Alzheimer's disease, reporting a pooled sensitivity of 92 percent and a specificity of 86 percent for classifying Alzheimer's patients against healthy controls.2
This performance was higher than that for the classification of mild cognitive impairment from healthy controls, which had a pooled sensitivity and specificity of 83 percent.2
Similar results were obtained with models used to distinguish patients with mild cognitive impairment who progressed to Alzheimer's disease from those who did not, with a pooled sensitivity of 73 percent and a specificity of 69 percent. This may reflect the fact that Alzheimer's disease is a neurodegenerative condition and can therefore be viewed as one end of a continuum, with neurocognitive health at the other extreme.2
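For listeners following along with the transcript, here is a minimal sketch, in Python, of how figures like these are derived from a classifier's confusion matrix. The counts are hypothetical, chosen only to reproduce the 92 and 86 percent figures mentioned above; they are not data from the review.

```python
# Hypothetical confusion-matrix counts for a binary classifier separating
# patients from healthy controls (illustrative numbers, not review data).
true_positives = 92   # patients correctly flagged as patients
false_negatives = 8   # patients the model missed
true_negatives = 86   # controls correctly cleared
false_positives = 14  # controls incorrectly flagged as patients

# Sensitivity: the proportion of actual patients the model detects.
sensitivity = true_positives / (true_positives + false_negatives)

# Specificity: the proportion of healthy controls the model correctly clears.
specificity = true_negatives / (true_negatives + false_positives)

print(f"Sensitivity: {sensitivity:.0%}")  # Sensitivity: 92%
print(f"Specificity: {specificity:.0%}")  # Specificity: 86%
```

Pooled values like those reported in the umbrella review combine many such per-study estimates across the included systematic reviews.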
For the other disorders included in the review, the results were promising, although variable. It is interesting to note that artificial intelligence classifiers trained on neuroimaging data tended to perform better than those trained on genetic data in distinguishing patients with schizophrenia from healthy subjects; this could be because both genetic and environmental factors contribute to this condition, so genetic data alone capture only part of its etiology.2
Similarly, in patients with bipolar disorder, genetic data alone showed a lower performance, while neuropsychological data appeared to be more reliable than neuroimaging data.2
In contrast, the identification of patients with autism spectrum disorders from healthy controls showed significantly higher sensitivity and specificity when using biochemical features or electroencephalographic measures, while structural magnetic resonance imaging showed higher classification sensitivity and specificity than functional MRI.2
Likewise, although small sample sizes and high heterogeneity are reasons to treat the results with caution, AI models for discriminating obsessive-compulsive disorder from healthy controls performed better when using neuroimaging data.2
Finally, models for differentiating patients at high risk of developing psychotic disorders showed a pooled sensitivity and specificity of 78 percent and 77 percent, respectively.2
The evidence suggests that artificial intelligence has great potential to lead to a faster, more accurate, and more objective diagnosis. The authors propose that this technology is on the verge of clinical application.2
Many of the studies included in this review have reported excellent performance. However, it is important to note that much of their effectiveness depends on the correct choice of data modality coupled with the correct algorithms and methods.2
While AI could be a useful tool for the unbiased and objective classification of mental disorders, it is essential that practitioners have a basic understanding of the fundamental aspects and behaviors of this technology.2
Similarly, it is indispensable for researchers to consider the practicality of implementing these tools: neuroimaging data dominated the AI models in the systematic reviews included in this umbrella review, but because neuroimaging is a resource-intensive procedure, it may not be practical to incorporate into routine diagnostic practice.2
In contrast, AI models of neuropsychological, genetic, and electroencephalogram tests may offer exciting opportunities to complement and improve existing diagnostic procedures in mental health care.2
Capsule
Current evidence suggests that severe pathologies and high-risk behaviors, such as suicidal behavior, may not be appropriate for online work.3
Although online work offers clinicians the opportunity to help people who were previously unable to engage in psychotherapy, several studies have addressed the serious consequences of online counseling; in particular, deep learning-based approaches have been noted as inappropriate for suicidal and psychotic patients. Online counseling has other limitations as well, one of which is the inability to read non-verbal cues.3
End of capsule
Welcome back. In the previous section we talked about the potential offered by artificial intelligence applications in mental health care, but there are still significant gaps in our knowledge, such as how these innovations could be implemented and used to add value to healthcare services.1
In 2022, Nilsen and colleagues published a paper based on a selective review aiming to identify the challenges and opportunities of using AI in the field of psychiatry. The authors argue that AI applications in mental health can be broadly categorized into screening and assessment opportunities.1
Screening refers to identifying the patients most in need of intervention with the help of models that can recognize key indicators of mental health. Consider, for example, the work of Coppersmith and collaborators, who in 2018 applied artificial intelligence to social network data to identify individuals at increased risk for suicide. In turn, Schwartz and team in 2021 were able to determine which patients were most likely to benefit from cognitive behavioral or psychodynamic therapy in routine health care settings.1
Furthermore, assessment capabilities could help monitor elements during treatment, such as symptom change, as assessed by Huckvale and colleagues in 2019, which could facilitate better measurement-based care.1
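To make the screening idea concrete, here is a deliberately simplified sketch of a text-based risk screener. It is an illustration only, not a reconstruction of Coppersmith and collaborators' pipeline: the training snippets, labels, and model choice are all invented for the example.

```python
# A deliberately simplified, hypothetical sketch of text-based risk screening.
# The snippets, labels, and model choice are invented for illustration; a real
# screener needs validated clinical data, rigorous evaluation, and oversight.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training snippets with binary labels (1 = flag for human review).
texts = [
    "I can't see a way forward anymore",
    "Had a great walk with friends today",
    "Nothing matters and I want it all to stop",
    "Looking forward to the weekend trip",
]
labels = [1, 0, 1, 0]

screener = make_pipeline(TfidfVectorizer(), LogisticRegression())
screener.fit(texts, labels)

# The output is a priority score for human review, never a diagnosis.
risk = screener.predict_proba(["I feel like giving up"])[0][1]
print(f"Review priority: {risk:.2f}")
```

The essential design point is that the model's output is a priority score that routes people to a clinician, not a diagnostic verdict.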
These innovations have the potential to facilitate more reliable diagnoses of psychiatric disorders and to support patients and providers in making more informed shared decisions.1
Despite the great potential of artificial intelligence in psychiatry, its integration still encounters several barriers, including significant skepticism among healthcare providers. Health professionals must be able to adapt to the technology and achieve, in the words of Verghese and collaborators, “a partnership where the machine predicts, and the human explains and decides on actions”.1
Other barriers and challenges include regulatory concerns, issues of responsibility and accountability, and safety assessment.1
Patient trust is another important challenge, as there are concerns about technical aspects that can lead to communication problems, ethical considerations such as privacy, and regulatory issues.1
For instance, McCradden and colleagues have warned that prioritizing AI in clinical decision-making in psychiatry could have serious implications: some AI predictions could contribute to unnecessary institutionalization, undermine the credibility of patients' accounts of their own experiences, and even contribute to decisions that strip patients of the right to make their own treatment choices.4
To understand this potential danger, it is important to bear in mind that psychiatry relies on subjective factors, such as the patient's testimony, that are essential for the clinician to make diagnostic and management decisions.4
Over the last century, efforts have been made to promote data-driven approaches to clinical judgment, including identifying neuroimaging biomarkers and reconceptualizing psychiatric disorders through new transdiagnostic frameworks, which attempt to eliminate or control for confounding factors, values, and biases.4
Clearly, it is indispensable to better explain psychopathology and to better target and tailor treatments, but identifying reliable biomarkers of psychiatric disorders remains challenging.4
The potential of AI as a catalyst for more reliable taxonomies of mental illness and better predictive models for people with, or at risk of developing, psychiatric disorders could be groundbreaking; but without appropriate oversight, unintended consequences can occur that have negative implications for care, wellbeing, and patients' rights.4
In addition, some have suggested that AI could have a direct effect on the shared decision-making process in psychiatry. For example, Triberti and team postulate that these effects could manifest in three ways:4
1. Clinical decisions could be delayed or halted if the AI-generated recommendations are difficult to understand or explain.
2. Patients' symptoms and diagnoses could be misinterpreted as clinicians try to fit them into existing AI classifications, and their testimony about their own health could be ignored; in this regard, it is important to recognize that many patients already feel unable to express their perspectives during clinical encounters.
3. Confusion could arise about whether the algorithmic result or the clinician holds authority over treatment recommendations. Given the power of AI to analyze vast amounts of information per second, clinicians may feel compelled to align their judgments with the supposedly reliable algorithmic output, to the point where, if there is an error, some clinicians may follow the recommendation even against their initial, and perhaps accurate, judgment.
AI could have various applications that would improve the patient experience, such as eliminating racial and ethnic biases that affect psychiatric diagnoses at the individual level, or being scalable so that large numbers of people can be screened cost-effectively. Nevertheless, the authors emphasize that uncritically accepting AI as superior to humans in accuracy and knowledge risks entrenching many inequities that have long existed in the field of mental illness.4
As an example of the potential unintended consequences of AI in psychiatric settings, let us consider a hypothetical individual with a previously diagnosed borderline personality disorder who goes to the psychiatric emergency room seeking treatment for severe suicidality.4
An acute risk prediction model for prioritizing patients for urgent care might predict a low likelihood that this patient is acutely distressed based on the previous diagnosis, despite the patient's request for urgent care.4
While it is true that there are times when the emergency department may not be the appropriate setting for this patient's care needs, that decision should be based on clinical judgment, not just the result of an algorithmic model.4
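To see how such a failure mode could arise mechanically, consider the purely hypothetical toy model below. The features, weights, and threshold are invented; no real triage system is this simple, and the sketch exists only to show why the clinician's judgment must sit above the algorithm's score.

```python
# A purely hypothetical toy model showing the failure mode described above:
# a feature learned from historical data discounts this patient group's own
# reports. All features, weights, and thresholds are invented.

def toy_acute_risk_score(reports_suicidality: bool, prior_bpd_diagnosis: bool) -> float:
    score = 0.0
    if reports_suicidality:
        score += 0.7  # the patient's own report raises the score
    if prior_bpd_diagnosis:
        score -= 0.4  # a learned bias that suppresses the score for this group
    return score

score = toy_acute_risk_score(reports_suicidality=True, prior_bpd_diagnosis=True)
URGENCY_THRESHOLD = 0.5
print(f"Model score: {score:.1f}")  # 0.3 -- below the urgency threshold

# The safeguard: the algorithm may prioritize, but the clinician decides.
clinician_assessment_urgent = True  # based on direct evaluation of the patient
needs_urgent_care = clinician_assessment_urgent or score >= URGENCY_THRESHOLD
print(f"Urgent care: {needs_urgent_care}")  # True -- clinical judgment prevails
```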
It is therefore essential that patient testimony should not require validation by an AI system, and that clinicians and developers should work together to help patients understand and communicate their experiences of mental illness in the context of these technologies.4
Researchers often design AI algorithms for diagnosis and treatment recommendations in isolation, intended to run without human intervention, instead of collaborating with psychiatry experts at every stage of implementation and use to improve model performance while increasing the practicality, robustness, and reliability of the process.5
Nevertheless, rather than a choice between computers and humans, the future of AI in psychiatry will be about taking advantage of the best of both: maximizing explanatory power, minimizing bias, and keeping responsibility for the algorithm's outputs in the hands of human experts.5
In this regard, the work of Chandler and colleagues is noteworthy. They propose going beyond traditional approaches to implementing machine learning models, since the resulting models tend to be inappropriate for clinical applications and can amplify the biases inherent in the data.5
Their study suggests that better outcomes can be achieved by integrating a robust framework that incorporates human-in-the-loop methods into clinical practice, keeping human clinicians accountable for the decision-making process.5
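As a rough illustration of what such a human-in-the-loop workflow could look like, here is a minimal sketch in which the model proposes, the clinician decides, and every override is logged for auditing and later retraining. This is not Chandler and colleagues' implementation; the class and field names are invented.

```python
# A minimal, invented sketch of a human-in-the-loop workflow: the model
# proposes, the clinician disposes, and every override is recorded so the
# model can be audited and retrained. Not Chandler and colleagues' system.
from dataclasses import dataclass, field

@dataclass
class Review:
    model_suggestion: str
    clinician_decision: str
    overridden: bool

@dataclass
class HumanInTheLoop:
    audit_log: list = field(default_factory=list)

    def decide(self, model_suggestion: str, clinician_decision: str) -> str:
        # The clinician's decision is always final and always logged.
        review = Review(
            model_suggestion=model_suggestion,
            clinician_decision=clinician_decision,
            overridden=(model_suggestion != clinician_decision),
        )
        self.audit_log.append(review)
        return review.clinician_decision

loop = HumanInTheLoop()
final = loop.decide(model_suggestion="defer treatment",
                    clinician_decision="admit for evaluation")
print(final, "| overrides logged:", sum(r.overridden for r in loop.audit_log))
```

The design choice worth noting is the audit log: recorded disagreements between model and clinician are exactly the signal needed to detect and correct the biases the authors warn about.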
In summary, a humanistic psychiatric practice into which AI is thoughtfully integrated could bring benefits to healthcare in general, helping to free healthcare professionals from everyday pressures and burdens, allowing them to devote more time to their patients and to take their testimonies seriously.4
This, in turn, could make individuals more willing to seek healthcare and to participate more fully in their care process, in a setting where they feel safe.4
Thanks for joining us on this new season of Health Connect. Don't miss out on our next episode.
Discover more medical news on Viatris Connect.
References:
- 1. Nilsen P, Svedberg P, Nygren J, Frideros M, Johansson J, Schueller S. Accelerating the impact of artificial intelligence in mental healthcare through implementation science. Implement Res Pract. 2022; Available at: https://journals.sagepub.com/doi/10.1177/26334895221112033
- 2. Abd-alrazaq A, Alhuwail D, Schneider J, Toro C, Ahmed A, Alzubaidi M, et al. The performance of artificial intelligence-driven technologies in diagnosing mental disorders: an umbrella review. npj Digit Med. 2022;5:87.
- 3. Zhou S, Zhao J, Zhang L. Application of Artificial Intelligence on Psychological Interventions and Diagnosis: An Overview. Front Psychiatry. 2022;13:811665. Available at: https://www.frontiersin.org/articles/10.3389/fpsyt.2022.811665/full
- 4. McCradden M, Hui K, Buchman DZ. Evidence, ethics and the promise of artificial intelligence in psychiatry. J Med Ethics. 2022;108447. Available at: https://jme.bmj.com/content/early/2022/12/28/jme-2022-108447
- 5. Chandler C, Foltz PW, Elvevåg B. Improving the Applicability of AI for Psychiatric Applications through Human-in-the-loop Methodologies. Schizophr Bull. 2022;48(5):949-957. Available at: https://academic.oup.com/schizophreniabulletin/article/48/5/949/6593215
Links to all third-party sites are offered as a service to our visitors and do not imply endorsement, indication, or recommendation by Health Connect. The linked articles are provided for informational purposes only and are not intended to imply attribution by the author and/or publisher. Health Connect disclaims any responsibility for the content or services of other sites. We recommend that you review the policies and conditions of all sites you choose to access.
NON-2023-9809
NON-2023-2512