Generative AI Overreliance Threatens Doctors' Critical Thinking in Medical Education

The rapid adoption of artificial intelligence (AI) tools in medicine, particularly generative AI (GenAI), presents a profound paradox: while these tools offer considerable potential across a wide range of tasks, they also threaten the foundations of sound medical practice. Experts are issuing urgent warnings that overreliance on these powerful tools risks eroding critical thinking skills among new and future doctors, while potentially reinforcing existing data bias and inequity. The concern is magnified by the fact that GenAI tools are already widely used despite limited institutional policies and regulatory guidance.
The primary worry centers on how novice learners—medical students and trainee doctors who are still acquiring fundamental skills—will develop the necessary clinical judgment when sophisticated AI systems provide readily available answers. This dependence leads to several specific pitfalls:
One major risk is automation bias, an uncritical trust in automated information that develops with extended use. This feeds directly into cognitive off-loading and the outsourcing of reasoning, in which students shift the critical tasks of information retrieval, appraisal, and synthesis onto the AI, undermining memory retention and genuine critical thinking. The same effect contributes to deskilling, the blunting of essential abilities, which is particularly detrimental for novices who lack the experience required to probe and challenge the AI's advice.
Further complicating the landscape are issues inherent to GenAI itself, including hallucinations: fluent, plausible, but ultimately inaccurate information. The tools can also fabricate sources and encode bias, disrupting learners' education. Additionally, the sensitive nature of healthcare data makes breaches of privacy, security, and data governance a significant concern.
In response to these risks, authors from the University of Missouri, Columbia, USA, emphasize that medical education must exercise vigilance and adjust curricula to mitigate the technology's pitfalls. Curricular adjustments should include enhanced teaching of critical thinking, for example through cases in which AI outputs contain a mix of correct and intentionally flawed responses, forcing learners to accept, amend, or reject the advice and justify their decisions with evidence-based sources.
Furthermore, educational assessments need serious modification. The authors suggest grading the process of learning rather than solely the end product, on the assumption that students will have used AI. They also advocate designing assessments of critical skills that explicitly exclude AI, using supervised stations or in-person examinations for skills crucial to patient care, such as bedside communication, physical examination, teamwork, and professional judgment.
Crucially, AI literacy itself should be evaluated as a competency. Trainees must understand the principles underpinning AI’s strengths and weaknesses, know how to integrate these tools into clinical workflows effectively, and be able to evaluate the tools' performance and potential biases over time. Regulators and professional societies globally are urged to play their part by producing and updating guidance on the impact of AI on medical education.
Ultimately, while generative AI offers documented benefits, medical programmes must remain vigilant and proactively adjust their training to reduce these significant risks. Preserving sound critical thinking is paramount to patient safety and the future integrity of the medical profession.
Keywords: Generative AI