Mount Sinai Study Questions Artificial Intelligence in Clinical Triage
- Feb 27
- 2 min read

As artificial intelligence becomes more deeply embedded in daily life, a startling new study published in Nature Medicine casts serious doubt on reliance on consumer AI for health advice. According to researchers at the Icahn School of Medicine at Mount Sinai, ChatGPT Health—a platform used by roughly 40 million people daily—regularly fails to recognize true medical emergencies.
The findings are alarming. In 52% of cases where patients required immediate emergency care, the AI tool under-triaged them, advising them to stay home or book a routine appointment instead. In one harrowing simulation, the system told a suffocating patient to schedule a future appointment in 84% of trials, an appointment she would not live to see. Conversely, the tool overreacted in lower-risk scenarios, directing nearly 65% of individuals with no urgent medical need to seek emergency care.
Dr. Ashwin Ramaswamy, the study's lead author, noted that while the AI handled "textbook" emergencies like strokes well, it faltered in nuanced situations, such as asthma attacks with early signs of respiratory failure.
Perhaps most disturbingly, the system’s suicide-crisis safeguards were dangerously inconsistent. When patients explicitly shared plans for self-harm, the crisis alert often failed to trigger, yet it appeared more reliably in lower-risk scenarios. Furthermore, researchers found that simply adding routine lab results to a prompt could completely disable the suicide lifeline banner.
Experts are sounding the alarm. Alex Ruani, a researcher at University College London, called the results "unbelievably dangerous," warning that the false sense of security provided by these platforms could cost lives.
OpenAI has responded, stating that the study does not reflect typical real-world use and emphasizing that the model is continuously updated. However, as these tools evolve, the study's authors stress that ongoing independent evaluation is vital. For now, the message to patients is clear: when facing concerning symptoms or thoughts of self-harm, rely on human clinical judgment and seek emergency services directly.
Keywords: Artificial Intelligence Clinical Triage