
A Mount Sinai Study Questions Artificial Intelligence Clinical Triage

  • Feb 27
  • 2 min read

As artificial intelligence becomes more deeply integrated into our daily lives, a startling new study published in Nature Medicine casts serious doubt on our reliance on consumer AI for health advice. According to researchers at the Icahn School of Medicine at Mount Sinai, ChatGPT Health, a platform used by roughly 40 million people daily, regularly fails to recognize true medical emergencies.


The findings are alarming. In 52% of cases where patients required immediate emergency care, the AI tool under-triaged them, advising them to stay home or book a routine appointment instead. In one harrowing simulation, the system directed a suffocating patient to a future appointment in 84% of trials, an appointment she would not live to see. Conversely, it overreacted in lower-risk scenarios, unnecessarily directing nearly 65% of individuals who needed no urgent care to seek immediate emergency attention.


Dr. Ashwin Ramaswamy, the study's lead author, noted that while the AI handled "textbook" emergencies like strokes well, it faltered in nuanced situations, such as asthma attacks with early signs of respiratory failure.


Perhaps most disturbingly, the system’s suicide-crisis safeguards were dangerously inconsistent. When patients explicitly shared plans for self-harm, the crisis alert often failed to trigger, yet it appeared more reliably in lower-risk scenarios. Furthermore, researchers found that simply adding routine lab results to a prompt could completely disable the suicide lifeline banner.


Experts are sounding the alarm. Alex Ruani, a researcher at University College London, called the results "unbelievably dangerous," warning that the false sense of security provided by these platforms could cost lives.


OpenAI has responded, stating that the study does not reflect typical real-world use and emphasizing that the model is continuously updated. However, as these tools evolve, the study's authors stress that ongoing independent evaluation is vital. For now, the message to patients is clear: when facing concerning symptoms or thoughts of self-harm, rely on human clinical judgment and seek emergency services directly.



Keywords: Artificial Intelligence Clinical Triage


