Hripsime Kalanderian, MD
Psychiatrist, The Vancouver Clinic, Vancouver, Washington
Henry A. Nasrallah, MD
Professor of Psychiatry, Neurology, and Neuroscience; Medical Director, Neuropsychiatry; Director, Schizophrenia and Neuropsychiatry Programs, University of Cincinnati College of Medicine, Cincinnati, Ohio; Professor Emeritus, Saint Louis University, St. Louis, Missouri
Disclosures
The authors report no financial relationships with any companies whose products are mentioned in this article, or with manufacturers of competing products.
In a prospective study, researchers at Cincinnati Children’s Hospital used a machine-learning algorithm to evaluate 379 patients who were categorized into 3 groups: suicidal, mentally ill but not suicidal, or controls.24 All participants completed a standardized behavioral rating scale and participated in a semi-structured interview. Based on the participants’ linguistic and acoustic characteristics, the algorithm was able to classify them into the 3 groups with 85% accuracy.24
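To illustrate the general approach of classifying participants from speech-derived features, consider the minimal sketch below. The features, data, and choice of classifier are hypothetical placeholders for illustration only; they are not the features or algorithm reported in the study.

```python
# Minimal sketch: cross-validated 3-group classification from speech features.
# All data here are randomly generated placeholders (hypothetical example).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical per-participant features derived from interview recordings,
# e.g., speech rate, pause duration, word counts, acoustic measures.
X = rng.normal(size=(379, 6))         # 379 participants x 6 features
y = rng.integers(0, 3, size=379)      # 0 = suicidal, 1 = mentally ill, 2 = control

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)   # cross-validated accuracy
print(f"Mean accuracy: {scores.mean():.2f}")
```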
Many studies have examined using language analysis to predict the risk of psychosis in at-risk individuals. In one study, researchers evaluated individuals known to be at high risk of developing psychosis, some of whom eventually did develop psychosis.25 Participants were asked to retell a story and to answer questions about it. Researchers fed transcripts of these interviews into a language analysis program that assessed semantic coherence, syntactic complexity, and other factors. The algorithm predicted the future occurrence of psychosis with 82% accuracy; participants who converted to psychosis showed decreased semantic coherence and reduced syntactic complexity.25
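The sketch below shows one common way such measures can be computed: semantic coherence as the similarity between consecutive sentences (here using latent semantic analysis vectors) and a crude proxy for syntactic complexity (mean sentence length). This is an assumption-laden illustration of the general technique, not the pipeline used in the study.

```python
# Minimal sketch of two speech-derived measures (hypothetical implementation):
# semantic coherence and a simple syntactic-complexity proxy.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Toy transcript; a real study would use full interview transcripts.
transcript = [
    "I watched the movie with my brother.",
    "My brother liked the ending of the movie.",
    "The weather has been cold lately.",
]

# Embed each sentence with latent semantic analysis (TF-IDF + truncated SVD).
tfidf = TfidfVectorizer().fit_transform(transcript)
vectors = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)

# Semantic coherence: average cosine similarity of adjacent sentences.
sims = [cosine_similarity(vectors[i:i + 1], vectors[i + 1:i + 2])[0, 0]
        for i in range(len(transcript) - 1)]
coherence = float(np.mean(sims))

# Syntactic-complexity proxy: mean number of words per sentence.
complexity = float(np.mean([len(s.split()) for s in transcript]))

print(f"coherence={coherence:.2f}, complexity={complexity:.1f}")
```

In practice, lower average coherence and shorter, simpler sentences would feed into a downstream classifier as predictive features.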
A similar study looked at 34 at-risk youth in an attempt to predict who would develop psychosis based on speech pattern analysis.26 The participants underwent baseline interviews and were assessed quarterly for 2.5 years. The algorithm was able to predict who would develop psychosis with 100% accuracy.26
Challenges and limitations
Research on applying machine learning to various fields of psychiatry continues to grow. With this increased interest have come reports of bias and human influence at various stages of machine learning. Being aware of these challenges and engaging in practices to minimize their effects is therefore necessary; such practices include providing more detail on data collection and processing, and continually evaluating machine learning models for their relevance and utility to the research question.27
As is the case with most innovative, fast-growing technologies, AI has drawn its share of criticism. Critics have focused on potential threats to privacy, medical errors, and ethical concerns. Researchers at the Stanford Center for Biomedical Ethics emphasize the importance of being aware of the different types of bias that humans and algorithm designs can introduce into health data.28