Competence

I train doctors.

In my role as a residency program director, I have three major responsibilities:

  1. Recruiting: find and hire medical school graduates.
  2. Curriculum: set educational standards to produce well-qualified internists.
  3. Accreditation: comply with national professional norms and requirements.

Regarding #2, the goal is for trainees to demonstrate competence as doctors by the end of the three-year training period (‘residency’). Ideally, they acquire it in a steady, graded fashion, hitting distinct mileposts along the way, so that teaching faculty know residents are making adequate progress and will flourish as independent doctors.

How do professions measure and determine competence? In medical training, residency programs are subject to regulations set by “Residency Review Committees,” which are empowered by a national accrediting body. Those committees make periodic site visits to inspect our training environment and make sure that we’re following best practices (and the rules…).

It’s up to us to comply with their rules, but we have leeway in interpreting them, which leaves room for innovation in how we implement our educational models.

Over the last fifteen years, the national governing body settled on six “core competencies” that define competence for doctors of all specialties. Regardless of what kind of medicine you practice, there should be fundamental attributes that all doctors share, right?

Those six competencies are:

  • patient care (duh)
  • medical knowledge (also duh)
  • interpersonal skills and communication (hey, I kinda like that…)
  • professionalism (for sure, right?)
  • systems-based practice (huh?)
  • practice-based learning & improvement (I think you lost me on these last 2)

Give yourself an exercise: now that you know the domains of competency, how would you evaluate learners in each of them?

Perhaps unsurprisingly, medical educators turned to numeric grading scales to evaluate residents in each of these domains. Quantifying residents’ performance made it easier to document both interval progress and ultimate competence.

The problem was that different faculty members interpreted the grading scales differently. Grade inflation started making nearly everyone look the same, at least as far as their evaluation numbers were concerned. Asked to define what makes a competent physician, faculty responded along the lines of Supreme Court Justice Potter Stewart, who famously quipped about the hard-to-define concept of obscenity, “…I know it when I see it…”

Voilà, welcome to the Next Accreditation System (‘NAS’). Program Directors like me across the country are currently struggling to implement this new system, with a goal of allowing more detailed analyses of learners’ performances. Another goal of the new system is to allow more freedom and flexibility in educational innovation by making reporting requirements more frequent but less onerous (hey, will that work?), keeping educators’ eyes fixed more on teaching and training than on evaluating and reporting. [I’m imagining that in the near term, there will be a lot of the latter. I hope the former is not diminished…]

The new system is predicated on developmental milestones that lead doctors to become competent in a range of entrustable professional activities (“EPAs”). These EPAs map back to the six competencies I listed above.

Got it?

At a recent national meeting discussing these changes and strategies for handling them, one colleague likened these new mandates to “repairing the airplane while flying it…”

No one ever said that change is easy. Best for us to embrace it and make it a learning opportunity.