AI for Precision Oncology: Balancing Breakthroughs with Clinical Trust
- Dec 15
- 2 min read

Artificial Intelligence (AI) is rapidly emerging as a transformative tool in oncology, primarily by enhancing the speed and accuracy of medical image analysis, from X-rays and MRIs to CT scans. Unlike human reviewers, AI systems can process thousands of images quickly and do not suffer from fatigue or loss of focus.
This capability is driving a shift toward precision oncology. The rising incidence of cancer in younger people means patients may need therapy for longer, creating demand for "kinder medicines". Machine learning and deep learning models are now used to analyze large, multimodal datasets, detecting patterns across molecular, cellular, and clinical data that are often invisible to human experts, gradually moving cancer care from treating broad patient groups to tailoring treatment to the individual.
A prime example is the application of AI in colorectal cancer. A Norwegian start-up is using AI to analyze tissue samples, predicting how quickly a cancer is likely to grow and assessing its risk in greater detail than the human eye can discern. The technology has proven more accurate than human pathologists at predicting patient outcomes. This improved prognostic analysis can help doctors decide which patients truly need aggressive treatments such as chemotherapy and which can safely avoid them. Given that chemotherapy often follows surgery as a "one-size-fits-all" approach that offers no benefit to the majority of stage II and III patients, exposing them only to harmful side effects, such precision is vital.
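To make the idea of risk stratification concrete, here is a minimal sketch using entirely synthetic data and invented histology-style features. It is not the Norwegian start-up's actual method; it only illustrates how a model might score recurrence risk so that low-risk patients could be flagged as candidates for discussing whether to forgo chemotherapy.

```python
# Illustrative only: a toy risk-stratification model on synthetic
# histology-derived features. Feature names, labels, and thresholds
# are invented for demonstration, not taken from any real system.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000

# Hypothetical features a digital-pathology pipeline might extract
X = np.column_stack([
    rng.normal(5, 2, n),    # mean nuclear size
    rng.poisson(3, n),      # mitotic figures per field
    rng.uniform(0, 1, n),   # tumour-stroma ratio
])

# Synthetic label: recurrence within 5 years (toy generative rule)
logits = 0.4 * X[:, 0] + 0.6 * X[:, 1] + 2.0 * X[:, 2] - 5.0
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logits))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# Score each held-out patient with a continuous recurrence risk
risk = model.predict_proba(X_te)[:, 1]
print("AUC:", round(roc_auc_score(y_te, risk), 3))

# Thresholding is where a prognostic score meets a treatment decision:
# patients below a chosen cut-off might be candidates to discuss
# avoiding adjuvant chemotherapy with their clinician.
print("Low-risk fraction:", (risk < 0.3).mean())
```

In a real system the features would come from models trained on whole-slide images and the labels from long-term patient follow-up; the point of the sketch is simply that the output is a graded risk score, not a single yes/no verdict.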
While AI significantly improves diagnostic accuracy, its integration faces considerable hurdles, centered largely on trust and usability. Many clinicians view AI with distrust due to the "black box" problem, where they cannot discern how the AI arrived at its prediction.
Researchers studying oncologists and radiologists as they analyzed breast cancer images found that more elaborate explanations of the AI's assessments did not necessarily generate more trust. Processing the additional, more complex information increased the clinicians' cognitive workload, drew their focus away from the images themselves, slowed decision-making, and ultimately degraded overall performance. Clinicians are more likely to make mistakes, and potentially to harm patients, when forced to absorb too much supplementary information.
Another risk is the potential for "blind trust". If clinicians develop high confidence in a poorly designed AI system that makes errors, they may cease to adequately scrutinize its results, which could lead to crucial information being overlooked and to patient harm.
Experts emphasize that for AI to be successfully integrated, systems must be built thoughtfully, balancing perceived usefulness with perceived ease of use. Designers must take care that AI explanations do not become cumbersome. Ultimately, human oversight remains necessary, and clinicians who use these tools need training focused on interpreting, rather than simply trusting, AI outputs.