
Will AI Eventually De-Skill Doctors? The Evidence Is Trickling In

Over the last several years, artificial intelligence (AI) has been weaving itself into the fabric of clinical medicine. Today, AI drafts chart notes for doctors, performs preliminary radiology reads and flags high-risk patients. When asked about a patient case, AI can generate a broad differential diagnosis and a proposed treatment plan in seconds, complete with links to guidelines. Many AI tools are proving tremendously helpful. Some make doctors more efficient. But here’s the real question: will AI tools “de-skill” doctors, slowly eroding the mastery that defines the profession?

Historically, concerns about AI-driven de-skilling have been largely speculative. Now, early empirical evidence is emerging. While the data are still preliminary, the signal is strong enough that we should all start paying attention.

Evidence of De-skilling With AI-Assisted Colonoscopies

Recent evidence of de-skilling comes from a 2025 observational study published in The Lancet Gastroenterology & Hepatology. The study examined AI systems designed to detect adenomas—non-cancerous tumors in the GI tract that can sometimes transform into cancer.

In the study, endoscopists who routinely used AI assistance showed a significant decline in adenoma detection, from 29% to 22%, during subsequent non-AI procedures. This suggests that sustained AI exposure can negatively impact measurable clinical performance.

The Cognitive Trap in Doctor De-skilling: Off-Loading and Unengaged Thinking

Experimental research in cognitive psychology helps explain what might be happening. Studies have demonstrated a negative correlation between frequent AI tool use and critical thinking. The mechanism is what’s called cognitive off-loading.

When doctors rely heavily on AI outputs, independent analytic reasoning declines.

This has also been shown in studies outside of healthcare, where participants who blindly adopted AI outputs without scrutiny performed worse on complex analytic tasks than those who worked independently. The effect is more pronounced in lower-performing users. The investigators attributed the findings to what they called an “unengaged interaction with AI.”

The issue is not AI assistance per se. It is the passive acceptance of its outputs, where the human brain unplugs.

The Risk of “Never-Skilling” in Training Physicians

While passive reliance is a risk for all doctors, its most insidious effect may be on those at the beginning of their training.

A study found that radiologists’ ability to catch AI-generated errors in mammograms correlated strongly with experience. In a simulated scenario where an AI system provided an incorrect suggestion, the rate of correctly read mammograms was 20% for inexperienced radiologists, 25% for the moderately experienced and 46% for the very experienced.

This raises the specter of what is called “never-skilling.” If medical trainees rely on AI-generated differentials before wrestling with clinical ambiguity themselves, the scaffolding of diagnostic reasoning that typically emerges during the years of residency training may never fully develop.

Instead of losing established skills, trainees may fail to develop true mastery in the first place.

This parallels concerns in aviation. Younger pilots trained on automated systems show less manual flying proficiency compared to those trained without automation. This is why modern flight training still mandates hours of manual flying: to ensure pilots have the fundamental skills required to step in when automation fails.

Will Doctors Be De-skilled or Instead Have Evolved Clinical Skills?

The arguments are typically binary: AI will either de-skill doctors or make them superhuman. The reality is more subtle.

Medicine has always evolved with its tools: the stethoscope, CT scanners and electronic health records. Each changed workflows and cognitive demands. Few would argue that the increased use of imaging de-skilled physicians, even though it shifted the emphasis from detailed physical exams toward image interpretation and clinical synthesis.

AI may do the same. The key distinction lies between replacement and augmentation.

When AI replaces active reasoning, de-skilling risk increases. However, when AI augments reasoning, providing additional data while requiring clinician interpretation, skills may evolve rather than erode.

The concept of adaptive practice provides a useful framework. It describes the ability of clinicians to shift fluidly between AI-assisted routines and independent problem-solving when uncertainty arises. Critical thinking becomes the core competency that anchors adaptive practice.

Therefore, AI literacy, not AI avoidance, may be the protective factor.

What Can Be Done To Mitigate Doctor De-Skilling?

Educational strategies to address the issue are in development.

One approach requires clinicians, in particular trainees, to generate independent assessments before viewing AI suggestions. However, this may be difficult to implement in practice because access to AI systems is so ubiquitous.

AI systems could also be developed with explainability features to help promote understanding. For instance, instead of merely highlighting a region of a lung scan as “suspicious for malignancy,” an explainable AI system might also provide a heatmap of the pixels that most influenced its decision. This would force a radiologist to engage with the why behind the alert, transforming the AI from a simple autopilot into a vehicle for continuous learning.

Additional strategies include implementing cognitive forcing techniques that prompt users to justify acceptance of AI outputs, or structuring workflows so AI cues can be toggled or delayed rather than automatically displayed.

Importantly, none of these strategies have been validated in trials as effective anti-deskilling interventions.

When it comes to the potential for de-skilling of doctors, medicine’s challenge is not to resist AI but rather to integrate it with intention. This will mean redesigning training to prioritize the cognitive muscles that AI risks atrophying: critical thinking, adaptive practice and skepticism. It will also mean building AI systems that demand human engagement.

Ultimately, AI will change how physicians work. But whether AI tools de-skill doctors or strengthen them depends entirely on how they are implemented. The future physician will undoubtedly rely on algorithms. Perhaps the defining skill will become a disciplined, human ability to question AI, learn from it and step in when it makes a mistake.
