Introduction

This blog is about medical education in the US and around the world. My interest is in education research and the process of medical education.



The lawyers have asked that I add a disclaimer making clear that these are my personal opinions and do not represent any position of any university that I am affiliated with, including the American University of the Caribbean, the University of Kansas, the KU School of Medicine, Florida International University, or the FIU School of Medicine. Nor does any of this represent any position of the Northeast Georgia Medical Center or Northeast Georgia Health System.



Wednesday, October 1, 2025

The Dual Identity of Clinician-Teachers: Navigating Boundaries and Burdens


Every clinician-teacher knows the feeling: one moment you are focused on patient care—managing a complex case, coordinating the team, navigating the electronic medical record—and the next moment, a medical student or resident is by your side, asking a question, eager to learn. In that instant, you are not only a physician but also a teacher, expected to explain, model, and inspire. This dual identity is both the beauty and the burden of academic medicine.

In recent years, there have been deeper attempts to examine what it means to live at the intersection of the clinician and teacher roles. A recent article in Medical Education by Alexandraki (1) highlights the tensions and opportunities that arise when these identities overlap. The author describes how clinical educators navigate role boundaries, often without explicit institutional support, and how these tensions shape both the quality of teaching and the satisfaction of faculty members. This got me thinking more about the pressures that our faculty are under.

The Boundaries of Identity

Clinician-teachers face a constant negotiation of boundaries. On one hand, patient care demands efficiency, accuracy, and often rapid decision-making; on the other hand, teaching requires slowing down, making thinking visible, and creating space for questions. Some of the very qualities that make one an excellent physician—efficiency, decisiveness, independence—can conflict with the qualities that make one an excellent teacher—patience, transparency, collaboration.

The article notes that these boundaries are often blurred. Teaching can enhance clinical care by improving communication, reinforcing clinical reasoning, and engaging learners as team members. Yet teaching can also slow workflow, reduce productivity, and create friction in systems that prioritize volume over value. An oft-quoted study by Vinson and colleagues (2) found that when a student was in the practice, family physicians saw fewer patients: clinical productivity decreased from 3.9 to 3.3 patients per hour. Over a typical four-hour half-day session, that difference amounts to roughly 2.4 fewer patients seen. Clinician-teachers are left to reconcile these conflicting demands, often without formal recognition or protected time.

Systemic Pressures

The challenges of dual identity are not just personal—they are systemic. In most practices and institutions, clinical productivity is measured in relative value units (RVUs), a measure of the work done in a patient encounter, while teaching contributions are harder to quantify. Vinson and colleagues (2) found that the time a faculty physician spent working increased by 52 minutes per day when teaching. Another study by Denton, Pangaro, and colleagues (3) found that having a medical student working with a physician in the outpatient internal medicine clinic adds 32 minutes to a clinic session. Similarly, Grayson and colleagues (4) surveyed preceptors and found that 61% reported a decrease in the number of patients seen. Faculty may be praised for “going above and beyond” in teaching, but rarely are they rewarded in tangible ways. In medical schools, promotion systems often prioritize research and academic recognition while leaving educational contributions undervalued. As a result, clinician-teachers may feel like they are constantly falling short of their clinical expectations.

This misalignment creates stress and, over time, may contribute to burnout. Clinician-teachers often report feelings of being pulled in too many directions, of their educational work being “invisible labor,” and of struggling to sustain enthusiasm in the face of mounting clinical pressures. These challenges may be compounded for women, underrepresented minorities, and international faculty, who may experience additional burdens related to equity, representation, and bias. 

The Rewards of Dual Identity

Yet the story is not only about burden—it is also about joy. Many clinician-teachers describe teaching as the most rewarding part of their day. Educating the next generation of physicians provides a sense of purpose, continuity, and meaning. It reinforces one’s own knowledge, sharpens clinical reasoning, and fosters professional community. Learners can bring fresh perspectives, challenge assumptions, and remind faculty why they entered medicine in the first place. Grayson’s survey of preceptors (4) found that 82% reported more enjoyment of the practice of medicine, 66% spent more time reviewing clinical medicine, and 49% felt a stronger desire to keep up with recent developments in medicine.

The dual identity, if well supported, can be profoundly enriching. It offers a professional life that integrates service and scholarship, mentorship, and practice. Clinician-teachers often become role models not only for clinical excellence but also for professional identity formation, showing learners how to balance compassion with competence, efficiency with empathy. 

Strategies for Support

How can institutions better support clinician-teachers in navigating these boundaries? 

1. Clarify Expectations – Institutions should clearly define the role of clinician-teachers, setting realistic expectations for clinical productivity and educational engagement. Without clarity, faculty are left to guess, often feeling like they are falling short in both domains.

2. Provide Protected Time – Teaching should not be an extracurricular activity squeezed between patient visits and after hours. Providing protected time for education signals that it is valued and essential, not optional.

3. Recognize and Reward Teaching – Promotion and tenure criteria must align with the real contributions of clinician-teachers. This means valuing curriculum design, mentorship, and educational leadership as much as publications and RVUs.

4. Invest in Faculty Development – Clinician-teachers need training not only in pedagogy but also in boundary negotiation, time management, and professional identity formation. Faculty development programs should explicitly address the dual identity challenge.

5. Promote Equity – Support should be distributed fairly, with attention to equity across gender, race, specialty, and career stage. Institutions should be intentional about mentorship and sponsorship for underrepresented groups. Financial support should not flow only to the high-revenue specialties.

6. Model Integration – Leaders who themselves are clinician-teachers can model integration of roles, demonstrating how clinical care and teaching can enrich one another rather than compete.

Looking Forward

The future of academic medicine depends on clinician-teachers who can thrive in both roles. As medicine becomes more complex, the need for skilled educators embedded in clinical settings will only grow. If institutions fail to support this dual identity, they risk losing talented faculty and weakening the pipeline of future educators.

The challenge, then, is not to choose between clinician and teacher but to create systems where both roles are fully supported and mutually reinforcing. This requires intentional policies, cultural shifts, and recognition that teaching is not peripheral but central to the mission of academic medicine.

In the end, the dual identity of clinician-teachers is not a problem to be solved but a reality to be embraced. By acknowledging the boundaries, addressing the burdens, and celebrating the rewards, we can create an environment where clinician-teachers not only survive but flourish. And when they flourish, so too do the learners and patients they serve.


REFERENCES

1) Alexandraki I. Exploring the boundaries between clinician and teacher. Med Educ. 2025; 59 (2): 136-138. doi:10.1111/medu.15586.

2) Vinson DC, Paden C, Devera-Sales A. Impact of medical student teaching on family physicians' use of time. J Fam Pract. 1996; 42 (3): 243-249. PMID: 8636675.

3) Denton GD, Durning SJ, Hemmer PA, Pangaro LN. A time and motion study of the effect of ambulatory medical students on the duration of general internal medicine clinics. Teach Learn Med. 2005; 17 (3): 285-289. doi:10.1207/s15328015tlm1703_15.

4) Grayson MS, Klein M, Lugo J, Visintainer P. Benefits and costs to community-based physicians teaching primary care to medical students. J Gen Intern Med. 1998; 13 (7): 485-488. doi:10.1046/j.1525-1497.1998.00139.x.


Wednesday, September 3, 2025

Residency Selection in the Age of Artificial Intelligence: Promise and Peril


Residency selection has always been a high-stakes, high-stress process. For applicants, it can feel like condensing years of study and service into a few fleeting data points. For programs, it is like drinking from a firehose—thousands of applications to sift through with limited time and resources, and an abiding fear of missing the “right fit.” In recent years, the pressures have only grown: more applications, Step 1 shifting to pass/fail, and increased calls for holistic review in the name of equity and mission alignment. 

Into this crucible comes artificial intelligence (AI). Advocates promise that AI can tame the flood of applications, find overlooked gems, and help restore a measure of balance to an overloaded system. Critics worry that it will encode and amplify existing biases, creating new blind spots behind the sheen of algorithmic authority. A set of recent papers provides a window into this crossroads, with one central question: will AI be a tool for fairness, or just a faster way of making the same mistakes?

What We Know About Interviews   

Before even considering AI, it helps to step back and look at the traditional residency interview process. Lin and colleagues (1) recently published a systematic review of evidence-based practices for interviewing residency applicants in Journal of Graduate Medical Education. Their review of nearly four decades of research is sobering: most studies are low to moderate quality, and many of our cherished traditions—long unstructured interviews, interviewer “gut feelings”—have little evidence behind them. What does work? Structure helps. The multiple mini interview (MMI) shows validity and reliability. Interviewer training improves consistency. Applicants prefer shorter, one-on-one conversations, and they value time with current residents. Even virtual interviews, despite mixed reviews, save money and broaden access. 

In other words, structure beats vibe. If interviews are going to continue as a central part of residency selection, they need to be thoughtfully designed and consistently delivered.

The Scoping Review: AI Arrives 

The most important new contribution to this debate is Sumner and colleagues’ scoping review in JGME (2). They examined the small but growing literature on AI in residency application review. Of the twelve studies they found, three-quarters focused on predicting interview offers or rank list positions using machine learning.  Three articles used natural language processing (NLP) to review and analyze letters of recommendation. 

The results are promising but fragmented. Some models could replicate or even predict program decisions with decent accuracy. Others showed how NLP might highlight subtle patterns in narrative data, such as differences in the language of recommendation letters. But strikingly, only a quarter of the studies explicitly modeled bias. Most acknowledged it as a limitation but stopped short of systematically addressing it. The authors conclude that AI in residency recruitment is here, but it is underdeveloped, under-regulated, and under-evaluated. Without common standards for reporting accuracy, fairness, and transparency, we risk building shiny black boxes that give an illusion of precision while quietly perpetuating inequity.
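To make the NLP piece concrete, here is a minimal sketch, in Python, of how one might surface differences in the language of recommendation letters. The letter text, the two groups, and the simple smoothed log-odds comparison are all illustrative assumptions on my part; the studies in the review used far larger corpora and more sophisticated pipelines.

```python
# A minimal sketch of comparing word usage across two sets of
# recommendation letters. The letters below are hypothetical
# placeholders, not real data.
import math
from collections import Counter

def word_counts(letters):
    """Tokenize crudely on whitespace and count lowercase words."""
    counts = Counter()
    for letter in letters:
        counts.update(letter.lower().split())
    return counts

def log_odds(counts_a, counts_b, word, alpha=0.5):
    """Smoothed log-odds ratio of a word's relative frequency in A vs B."""
    p_a = (counts_a[word] + alpha) / (sum(counts_a.values()) + alpha)
    p_b = (counts_b[word] + alpha) / (sum(counts_b.values()) + alpha)
    return math.log(p_a / p_b)

group_a = ["she is a caring and compassionate student",
           "a hardworking and pleasant team member"]
group_b = ["he is a brilliant and exceptional candidate",
           "an outstanding and decisive future leader"]

counts_a, counts_b = word_counts(group_a), word_counts(group_b)
for word in sorted(set(counts_a) | set(counts_b),
                   key=lambda w: log_odds(counts_a, counts_b, w)):
    print(f"{word:15s} {log_odds(counts_a, counts_b, word):+.2f}")
```

Words with strongly positive or negative scores are the ones used disproportionately in one group's letters, which is exactly the kind of pattern the NLP studies flag.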

Early Prototypes in Action

Several studies give us a glimpse of what AI might look like in practice. Burk-Rafel and colleagues at NYU (3) developed a machine learning–based decision support tool, trained on over 8,000 applications across three years of internal medicine interview cycles. The training data comprised 61 features, including demographics, time since graduation, medical school location, USMLE scores or score status, awards (such as AOA), and publications, among many others. Their model achieved an area under the receiver operating characteristic curve (AUROC) of 0.95 and performed nearly as well (0.94) without USMLE scores. Interestingly, when deployed prospectively, it identified twenty applicants for interview who had been overlooked by human reviewers, many of whom later proved strong candidates. Here, AI wasn’t replacing judgment but augmenting it, catching “diamonds in the rough” that busy faculty reviewers had missed.
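For readers curious what this looks like in code, here is a minimal sketch of the general pattern such tools follow: fit a classifier on historical application features and past interview decisions, then measure discrimination with AUROC. Every feature, parameter, and data point below is synthetic and assumed for illustration; this is not the NYU team's actual pipeline.

```python
# A minimal sketch of ML-based application screening: train on past
# interview decisions, evaluate with AUROC. All data are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# Hypothetical features: exam score, publication count, months since graduation.
X = np.column_stack([
    rng.normal(230, 15, n),
    rng.poisson(2, n),
    rng.exponential(12, n),
])
# Synthetic historical interview decisions, loosely tied to the features.
logit = 0.05 * (X[:, 0] - 230) + 0.4 * X[:, 1] - 0.05 * X[:, 2]
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)

# AUROC: the probability the model ranks a true "invite" above a "reject".
scores = model.predict_proba(X_te)[:, 1]
print(f"AUROC: {roc_auc_score(y_te, scores):.3f}")
```

Note what the target variable is: past human decisions. A high AUROC here means the model mimics the program's history, which is precisely why bias in that history matters.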

Rees and Ryder’s work (4), published in Teaching and Learning in Medicine, took a different angle, building random forest machine learning models to predict ranked applicants and matriculants in internal medicine. Their models could predict with high accuracy (AUROC 0.925) who would be ranked, but struggled to predict who would ultimately matriculate (AUROC 0.597). The lesson: AI may be able to mimic program decisions, but it is far less certain whether those decisions correlate with outcomes that matter—like performance, retention, or alignment with mission.

Finally, Hassan and colleagues in the Journal of Surgical Education (5) directly compared AI with manual selection of surgical residency applicants. Their findings were provocative: the two applicant lists (AI-selected vs. program director-selected) overlapped by only 7.4%. AI identified high-performing applicants with efficiency comparable to traditional manual selection, but there were significant differences. The AI-selected applicants were more frequently white or Hispanic (p<0.001), more often US medical graduates (p=0.027), younger (p=0.024), and had more publications (p<0.001). This raises questions about both list-generation processes, as well as about transparency and acceptance by faculty. Program faculty trust their own collective wisdom, but will they trust a machine learning process that highlights candidates they initially passed over?
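The two comparisons at the heart of that study, list overlap and demographic composition, are straightforward to sketch. The applicant IDs and counts below are hypothetical stand-ins, chosen only to show how such an analysis runs.

```python
# A minimal sketch of comparing an AI-generated interview list with a
# program-director-generated one. All IDs and counts are hypothetical.
from scipy.stats import chi2_contingency

ai_list = {f"app{i}" for i in range(0, 100)}    # AI-selected applicants
pd_list = {f"app{i}" for i in range(93, 193)}   # PD-selected applicants

overlap = ai_list & pd_list
print(f"Overlap: {len(overlap)} applicants "
      f"({100 * len(overlap) / len(ai_list):.1f}% of the AI list)")

# Contingency table of list composition, e.g., US graduates vs IMGs.
#                US grad  IMG
table = [[70, 30],   # AI-selected list
         [55, 45]]   # PD-selected list
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.4f}")
```

A small overlap plus significant composition differences, as in the paper, tells you the two processes are weighting the file very differently; it does not tell you which one is right.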

Where AI Could Help

Taken together, these studies suggest that AI could help in several ways:

- Managing volume: AI tools can quickly sort thousands of applications, highlighting candidates who meet baseline thresholds or who might otherwise be filtered out by crude metrics.
- Surfacing hidden talent: By integrating many data points, AI may identify applicants overlooked because of a single weak metric, such as a lower Step score or an atypical background.
- Standardizing review: Algorithms can enforce consistency, reducing the idiosyncrasies of individual reviewers.
- Exposing bias: When designed well, AI can make explicit the patterns of selection, shining light on where programs may unintentionally disadvantage certain groups; a simple selection-rate audit is sketched after this list.
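Here is the selection-rate audit promised in the last bullet: a minimal sketch, with hypothetical groups and counts, of the kind of disparity check a program could run on its own screening decisions.

```python
# A minimal sketch of a selection-rate audit. Groups and counts are
# hypothetical; a real audit would use actual applicant data.
def selection_rates(outcomes):
    """outcomes maps group -> (number selected, number applied)."""
    return {g: sel / tot for g, (sel, tot) in outcomes.items()}

outcomes = {
    "group_A": (120, 400),
    "group_B": (60, 300),
}
rates = selection_rates(outcomes)
for group, rate in rates.items():
    print(f"{group}: selection rate {rate:.1%}")

# Disparate-impact ratio: lowest group rate over highest group rate.
# Values well below 1.0 flag a gap worth investigating.
ratio = min(rates.values()) / max(rates.values())
print(f"Disparate-impact ratio: {ratio:.2f}")
```

This is deliberately simple; formal fairness audits go further (confidence intervals, multiple metrics, intersectional groups), but even this level of transparency is more than most selection processes get today.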

Where AI Could Harm

But the risks are equally real:

- Amplifying bias: Models trained on past decisions will replicate the biases of those decisions. If a program historically favored certain schools or demographics, the algorithm will “learn” to do the same.
- False precision: High AUROC scores may mask the reality that models are only as good as their training data. Predicting interviews is not the same as predicting good residents.
- Transparency and trust: Faculty may resist adopting tools they don’t understand, and applicants may lose faith in a process that feels automated and impersonal.
- Gaming the system: When applicants learn which features are weighted, they may tailor applications to exploit those cues—turning AI from a tool for fairness into just another hoop to jump through.

Broad Reflections: The Future of Recruitment

What emerges from these studies is less a roadmap and more a set of crossroads. Residency recruitment is under enormous pressure. AI offers tantalizing relief, but also real danger.

For programs, the key is humility and intentionality. AI should never completely replace human judgment, but it can augment it. Program directors can use AI to help manage scale, to catch outliers, and to audit their own biases. But the human values—commitment to service, the value of diversity, and the mission of training compassionate physicians—cannot be delegated to an algorithm.

For applicants, transparency matters most. A process already viewed as opaque will only grow more fraught if decisions are seen as coming from a black box. Clear communication about how AI is being used, and ongoing study of its impact on residency selection, are essential.

For the medical education community, the moment calls for leadership. We need reporting standards for AI models, fairness audits, and shared best practices. Otherwise, each program will reinvent the wheel—and the mistakes.

Residency recruitment has always been an imperfect science, equal parts art and data. AI does not change that. What it does offer is a new lens—a powerful, potentially distorting one. Our task is not to embrace it blindly nor to reject it out of fear, but to use it wisely, always remembering that behind every application is a human being hoping for a chance to serve.

References

(1) Lin JC, Hu DJ, Scott IU, Greenberg PB. Evidence-based practices for interviewing graduate medical education applicants: A systematic review. J Grad Med Educ. 2024; 16 (2): 151-165.

(2) Sumner MD, Howell TC, Soto AL, et al. The use of artificial intelligence in residency application evaluation: A scoping review. J Grad Med Educ. 2025; 17 (3): 308-319.

(3) Burk-Rafel J, Reinstein I, Feng J, et al. Development and validation of a machine learning–based decision support tool for residency applicant screening and review. Acad Med. 2021; 96 (11S): S54-S61.

(4) Rees CA, Ryder HF. Machine learning for the prediction of ranked applicants and matriculants to an internal medicine residency program. Teach Learn Med. 2022; 35 (3): 277-286.

(5) Hassan S, et al. Artificial intelligence compared to manual selection of prospective surgical residents. J Surg Educ. 2025; 82 (1): 103308.