
AI System Enhances Suicide Risk Detection and Prevention in Routine Medical Care
Jan 6
2 min read
A groundbreaking study from Vanderbilt University Medical Center has demonstrated how clinical alerts powered by artificial intelligence (AI) can help doctors identify patients at risk for suicide, potentially transforming prevention efforts in routine medical settings. The research team, led by Dr. Colin Walsh, Associate Professor of Biomedical Informatics, Medicine, and Psychiatry, tested the Vanderbilt Suicide Attempt and Ideation Likelihood model (VSAIL) in three neurology clinics. This AI system analyzes routine data from electronic health records to calculate a patient's 30-day risk of a suicide attempt, offering a vital tool for healthcare providers.
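The article does not describe VSAIL's internal features or weights, so as a purely illustrative sketch (all feature names and coefficients below are invented), a risk model of this kind might turn routine EHR-derived fields into a 30-day probability via logistic regression:

```python
# Hypothetical sketch of an EHR-based risk score; VSAIL's actual
# features, weights, and model family are not described in the article.
import math

def risk_score(features, weights, bias):
    """Logistic regression over EHR-derived features (illustrative only)."""
    z = bias + sum(weights[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))  # probability in [0, 1]

# Invented example inputs for illustration.
visit = {"prior_attempt": 1, "recent_ed_visits": 2, "age_under_30": 0}
weights = {"prior_attempt": 1.4, "recent_ed_visits": 0.3, "age_under_30": 0.2}
score = risk_score(visit, weights, bias=-4.0)  # e.g. ~0.12 here
```

A score above some operating threshold would then trigger an alert for that visit; the threshold controls how many visits get flagged.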
The study compared two approaches to alerting doctors about high-risk patients: interruptive pop-up alerts that broke into the physician's workflow, and passive alerts that simply displayed risk information in the patient's electronic chart. The results revealed that interruptive alerts were far more effective, leading doctors to conduct suicide risk assessments in 42% of flagged cases, compared to just 4% with the passive system.
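In outline, the two alert arms differ only in how a flagged visit is delivered to the physician. A minimal sketch, with hypothetical names (the article does not describe the EHR's alerting API):

```python
def deliver_alert(risk, threshold, arm):
    """Route a flagged visit to the alert style for the doctor's study arm.

    arm: "interruptive" -> modal pop-up the physician must acknowledge
         "passive"      -> risk note written to the chart, no interruption
    All names here are hypothetical; only the interruptive/passive
    distinction comes from the article.
    """
    if risk < threshold:
        return None  # below threshold: no alert of either kind
    if arm == "interruptive":
        return {"type": "popup", "requires_ack": True}
    return {"type": "chart_note", "requires_ack": False}

# Same high-risk visit, two delivery styles:
interruptive = deliver_alert(0.9, 0.5, "interruptive")
passive = deliver_alert(0.9, 0.5, "passive")
```

The study's finding is essentially that the `requires_ack` style, while more disruptive, was what actually prompted screenings.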

"Most people who die by suicide have seen a healthcare provider in the year before their death, often for reasons unrelated to mental health," said Dr. Walsh. "But universal screening isn't practical in every setting. We developed VSAIL to help identify high-risk patients and prompt focused screening conversations."
Suicide remains a significant public health challenge in the United States, claiming an estimated 14.2 lives per 100,000 people each year and ranking as the 11th leading cause of death. Studies show that 77% of individuals who die by suicide have contact with primary care providers within the year before their death, underscoring the urgent need for better risk screening methods.

In earlier testing, the VSAIL model proved effective at identifying high-risk patients, with one in 23 flagged individuals later reporting suicidal thoughts. The system is designed to flag only about 8% of patient visits for screening, keeping alert volume low enough for implementation in busy clinical environments.
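One way to hit a target flag rate like 8% is to set the score cutoff at the corresponding percentile of model output. A sketch with simulated scores (the article does not say how VSAIL's threshold was chosen):

```python
# Illustrative only: choose a cutoff so ~8% of visits are flagged.
import random

random.seed(0)
scores = [random.random() for _ in range(10_000)]  # stand-in for model output

sorted_scores = sorted(scores)
cutoff = sorted_scores[int(0.92 * len(sorted_scores))]  # 92nd percentile
flagged = [s for s in scores if s >= cutoff]
flag_rate = len(flagged) / len(scores)  # ~0.08
```

Raising the cutoff trades sensitivity for fewer interruptions, which is exactly the alert-fatigue balance the researchers discuss below.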
The recent study at Vanderbilt's neurology clinics, where certain neurological conditions are associated with increased suicide risk, involved 7,732 patient visits over six months and triggered 596 total screening alerts. Doctors were randomly assigned to receive either interruptive or non-interruptive alerts. During the subsequent 30-day follow-up period, no patients in either alert group were found to have experienced suicidal ideation or attempted suicide.

While interruptive alerts proved more effective in prompting screenings, they raised concerns about "alert fatigue," where doctors may become overwhelmed by frequent notifications. "Healthcare systems need to balance the effectiveness of interruptive alerts against their potential downsides," said Dr. Walsh.
The researchers suggested that systems like VSAIL could be adapted for use in other medical settings, potentially broadening their impact on suicide prevention efforts. "These results suggest that automated risk detection combined with well-designed alerts could help us identify more patients who need suicide prevention services," Dr. Walsh concluded.
This innovative approach highlights the potential of AI-driven clinical tools to address one of the most pressing challenges in public health. By selectively identifying high-risk patients and prompting timely interventions, systems like VSAIL could save lives and make suicide prevention efforts more effective in everyday healthcare environments.