Article

Do Resident Archetypes Influence the Functioning of Programs of Assessment?

Jessica V. Rich, Warren J. Cheung, Lara Cooke, Anna Oswald, Stephen Gauthier and Andrew K. Hall
1 Faculty of Education, Queen’s University, Kingston, ON K7L 3N6, Canada
2 Department of Emergency Medicine, University of Ottawa, Ottawa, ON K1N 6N5, Canada
3 Royal College of Physicians and Surgeons of Canada, Ottawa, ON K1S 5N8, Canada
4 Department of Clinical Neurosciences, University of Calgary, Calgary, AB T2N 1N4, Canada
5 Department of Medicine, University of Alberta, Edmonton, AB T6G 2R3, Canada
6 Department of Medicine, Queen’s University, Kingston, ON K7L 3N6, Canada
* Author to whom correspondence should be addressed.
Educ. Sci. 2022, 12(5), 293; https://doi.org/10.3390/educsci12050293
Submission received: 14 March 2022 / Revised: 7 April 2022 / Accepted: 14 April 2022 / Published: 20 April 2022
(This article belongs to the Special Issue Programmatic Assessment in Education for Health Professions)

Abstract

While most case studies consider how programs of assessment may influence residents’ achievement, we engaged in a qualitative, multiple case study to model how resident engagement and performance can reciprocally influence the program of assessment. We conducted virtual focus groups with program leaders from four residency training programs from different disciplines (internal medicine, emergency medicine, neurology, and rheumatology) and institutions. We facilitated discussion with live screen-sharing to (1) improve upon a previously derived model of programmatic assessment and (2) explore how different resident archetypes (sample profiles) may influence their program of assessment. Participants agreed that differences in resident engagement and performance can influence their programs of assessment in some (mal)adaptive ways. For residents who are disengaged and weakly performing (of whom there are a few), significantly more time is spent making sense of problematic evidence, arriving at a decision, and generating recommendations. In contrast, for residents who are engaged and performing strongly (the vast majority), significantly less effort is thought to be spent on discussion and formalized recommendations. These findings motivate us to fulfill the potential of programmatic assessment by more intentionally and strategically challenging those who are engaged and strongly performing, and by anticipating ways that weakly performing residents may strain existing processes.

1. Introduction

Competency-based models of medical education require a programmatic approach to assessment [1,2]. Conceptually, a program of assessment has been described as a system made up of interrelated and interdependent elements (i.e., people, decision-making tools and processes, and records of reporting), which continually influence one another to achieve larger, complementary purposes (i.e., promotion of personalized learning, identification of resident readiness for independent practice, and information about program efficacy) [3,4]. With programmatic assessment and competency-based medical education (CBME), higher-stakes decisions about resident progress, promotion, or remediation must be defensible and rooted in samples of lower-stakes workplace-based assessment which, when triangulated within workplace-based assessments and across other forms of assessment data (e.g., Objective Structured Clinical Exams, written exams), can demonstrate patterns of performance across contextual variables, such as time, patient/case characteristics, and assessors [5]. Further elaborating on models of programmatic assessment, it has been proposed that a program of assessment in CBME involves subsystems, including two co-dependent cycles (one of knowledge (assessment data/information) production and one of knowledge use), and that high-stakes decisions (e.g., about progress, promotion, and remediation) are only as trustworthy as the knowledge being produced, documented, and used to inform such decisions [6].
Translating programmatic assessment theory to practice has remained a challenge for program leaders and faculty responsible for designing, implementing, and sustaining these systems [7,8]. Programs are highly sensitive to contextual factors, such as the people and resources available to fuel and sustain the system [6]. These people often serve in multiple roles/capacities (e.g., a faculty member who also serves on the Competence Committee), thereby adding to the system complexity. Although guidelines to support the operationalization of programmatic assessment are being published [9], there is still a need for case studies that present implemented models of programmatic assessment in different education contexts to understand and highlight emerging implementation challenges and foster problem-solving across centres.
We have found that most case studies of programmatic assessment take a top-down approach and consider how the design elements of the program of assessment may influence residents’ engagement in learning and performance/achievement [7,10,11,12]. However, it is conceivable that perceived differences in residents’ engagement and strength of performance may reciprocally influence the functioning of the program of assessment, both holistically and when considering subcomponent parts. From a practical perspective, both of these variables (engagement in assessment processes and performance strength) are observable and are often topics of discussion between residents, academic advisors, and competence committee members [13]. From a theoretical perspective, engagement in assessment and the ability to demonstrate strong performance are important contributors to resident success within CBME programs [14]. Thus, the purpose of this research was to take a learner-centred approach to explore how sample resident archetypes (sample profiles along two continua: engagement and performance strength) may influence the workings of programs of assessment across multiple medical specialty training programs and institutions. Specifically, our research questions were as follows: (1) Does our previously developed model of programmatic assessment (PA), which is based on the operationalization of PA within one Emergency Medicine program at Queen’s University [6], reflect the models being operationalized in other institutions and medical specialties?; (2) Is this model of PA sensitive to differences in resident engagement and strength of performance?; and (3) Do resident engagement and strength of performance influence the perceived functioning of programs of assessment in similar ways, across programs and specialties?

Programmatic Assessment Model

The evidence-based model of programmatic assessment used for this study (Figure 1) represents a program of assessment as a system with two co-dependent knowledge (information) cycles: one of knowledge production (red) and one of knowledge use (blue). Information is produced when a faculty member, or a more competent other, formatively assesses a resident’s performance and documents key information about that performance in the resident’s ePortfolio, often in the form of an entrustment score with narrative feedback. Residents and their advisors later use these documented assessments to engage in resident assessment when self- and co-regulating the resident’s learning in advance of and during progress meetings. At a later point in time, Competence Committee members use this same information documented within the resident’s ePortfolio to collaboratively interpret patterns of performance and make high-stakes summative/evaluative decisions about the resident’s progress, promotion, or remediation. Academic Advisors may (or may not) take the lead on presenting and discussing a resident’s performance information (evidence) to the Competence Committee to inform their summative decision making (dependent on policy). Formalized evaluative decisions and feedback from the Competence Committee are then documented in the resident’s ePortfolio and used by the resident and their Academic Advisor to inform ongoing workplace-based learning and assessment opportunities. A key to support interpretation of the model and its components is provided in Figure 1.
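For readers who find a concrete trace helpful, the information flow described above can be sketched as a minimal Python example. This is an illustration only, assuming simplified names (WorkplaceAssessment, EPortfolio, document_assessment, competence_committee_review) and a toy decision rule; it is not the authors’ software nor any actual ePortfolio implementation.

from dataclasses import dataclass, field
from typing import List

@dataclass
class WorkplaceAssessment:          # knowledge production: one low-stakes, formative observation
    entrustment_score: int          # e.g., a 1-5 entrustment rating
    narrative_feedback: str

@dataclass
class EPortfolio:                   # the shared record linking the two cycles
    assessments: List[WorkplaceAssessment] = field(default_factory=list)
    learning_plan: List[str] = field(default_factory=list)

def document_assessment(portfolio: EPortfolio, score: int, feedback: str) -> None:
    """Frontline faculty document a workplace-based assessment (knowledge production)."""
    portfolio.assessments.append(WorkplaceAssessment(score, feedback))

def competence_committee_review(portfolio: EPortfolio) -> str:
    """The committee interprets documented patterns (knowledge use) and writes
    its formalized decision/feedback back into the resident's learning plan."""
    if not portfolio.assessments:
        decision = "insufficient evidence"
    elif all(a.entrustment_score >= 4 for a in portfolio.assessments):
        decision = "progressing as expected"
    else:
        decision = "targeted review required"
    portfolio.learning_plan.append("Committee decision: " + decision)
    return decision

# Example: information produced at the frontline is later used for a summative decision.
portfolio = EPortfolio()
document_assessment(portfolio, 4, "Managed an undifferentiated patient independently.")
print(competence_committee_review(portfolio))  # -> progressing as expected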
This model of programmatic assessment suggests that faculty responsible for conducting low-stakes frontline formative assessments may not know how the same information will be later used to make high-stakes summative/evaluative decisions about residents’ achievement of competence standards. Competence Committee members who use artefacts from workplace-based assessments (e.g., entrustment scores and narrative feedback) may struggle to make high-stakes decisions about resident remediation, progress, or promotion because of incomplete or problematic evidence documented in situ [15]. Thus, there is thought to be a knowledge (information) gap between ‘two communities’ [16]: faculty who initially produce and faculty who later use resident performance information. This knowledge gap is thought to exist because of limited opportunities for faculty to interact, communicate, and adequately document critical information about resident performance.

2. Materials and Methods

2.1. Study Design

We adopted a qualitative, multiple (multi-)case study design to model, interpret, and discuss how resident-level variables, specifically engagement in learning and performance (strength), influence the functioning of four programs of assessment. We studied this influence across different contexts, based on a convenience sample of four medical residency training programs from different institutions, with each representing a different discipline (internal medicine, emergency medicine, neurology, and rheumatology). We chose an interpretive, multi-case study given its utility for building understanding of how educational initiatives work (or fail to work as intended) across contexts, in order to share lessons learned about implementation [17,18].

2.2. Context

The programs of assessment studied were all situated in Canadian residency training programs which had adopted Competency-By-Design (CBD): the Royal College of Physicians and Surgeons of Canada’s implementation of CBME [19]. Within the last 3 years, all of the training programs involved underwent a major revision in the implementation of CBD, including a redesign of assessment programs emphasizing the principles of programmatic assessment. Programs of assessment in CBD are meant to intentionally combine multiple observations of performance using tools fit for purpose, within specialty-specific competency frameworks linking assessments to desired outcomes [1,19]. The frameworks utilize assessment of Entrustable Professional Activities (EPAs) [20], which are specific to the resident’s stage of training, as one component of the program of assessment. These assessments are initiated in the workplace by either faculty or residents, and the resultant assessment data are used to inform progression decisions by a Competence Committee (also known in the literature as a Clinical Competency Committee). Of note, some programs utilize faculty “Academic Advisors” as longitudinal coaches and as the link between the Competence Committee and residents in the CBD system [6,21].

2.3. Participant Recruitment

Before commencing data collection, ethics approval was obtained from The Queen’s University Health Sciences and Affiliated Teaching Hospitals Research Ethics Board (HSREB, TRAQ # 6032762). Via email, we recruited members of Competence Committees (inclusive of any Academic Advisors/Coaches) from four diverse postgraduate medical education specialty programs, with each of them coming from a unique university and city within Canada. Of the programmatic assessment stakeholders, Competence Committee members and Academic Advisors were recruited because of their dedicated roles in operationalizing programs of assessment within their respective training programs.

2.4. Data Collection

We conducted separate 90 min virtual focus groups with Competence Committee members from each specialty training program, using a set of pre-determined questions (see Supplementary Materials) and two moderators (JVR and AKH). Using an institutional Zoom license and a semi-structured approach, we first presented one illustrative, evidence-informed model of programmatic assessment [6] via screen sharing and then asked participants to (1) reflect on the model and consider whether it aligned with their program of assessment and (2) describe how the model could be revised to more accurately reflect their system of programmatic assessment. Next, we asked each group of participants to explain how four different resident archetypes influence the functioning of their system of programmatic assessment: (1) an engaged, strong-performing resident; (2) an engaged, weak-performing resident; (3) a disengaged, strong-performing resident; and (4) a disengaged, weak-performing resident. Even though the adjective ‘weak’ may not be in keeping with a growth orientation, a mindset identified as being important for CBME [22], we opted for ‘weak’ because it is a common antonym of ‘strong’ and because it describes the performance of an archetype rather than the potential capacity of the resident.
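For illustration only, the four archetypes are simply the cross of the two dimensions discussed with participants (engagement and performance strength), here reduced from continua to binary labels. The short Python sketch below uses assumed, hypothetical names and reproduces the order in which the cases were presented (1–4).

from itertools import product

ENGAGEMENT = ("engaged", "disengaged")    # continuum, simplified to two labels
PERFORMANCE = ("strong", "weak")          # continuum, simplified to two labels

# Cross the two dimensions to enumerate the four archetypes in presentation order.
archetypes = [f"{e} and {p} performing" for e, p in product(ENGAGEMENT, PERFORMANCE)]
print(archetypes)
# ['engaged and strong performing', 'engaged and weak performing',
#  'disengaged and strong performing', 'disengaged and weak performing']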
During focus groups, one moderator (JVR) focused on asking questions and note-taking, while the other moderator (AKH) listened to and adapted the base model of PA in MS PowerPoint to reflect participants’ descriptions. Both moderators asked prompting questions to probe for more detail and clarification as needed (see Supplemental Material for the data collection protocol). The models were shown to participants and revised in real-time using Zoom so that participants could comment on the accuracy of the representations based on their descriptions of each resident archetype. Each of the four resident archetypes was discussed independently, in the same order (1–4), and then compared/contrasted using the ‘slide sorter’ visualization tool in Microsoft PowerPoint. With permission in advance from participants, audio-visuals and written transcripts from each focus group were recorded for download from Zoom.

2.5. Data Analysis

In alignment with semi-structured qualitative research, data analysis began with data collection [18]. As the moderators (JVR and AKH) inductively picked up on recurring and discrepant insights between cases (between archetypes and specialty programs), they asked the focus group participants clarifying questions to member-check the accuracy of adaptations being made to the model of PA in real time. After four focus groups, clear and consistent themes were identified from the participants’ insights, and it became apparent to the moderators that we had enough information power to credibly answer our research questions [23].
Following completion of the fourth focus group, we met virtually as a research team to compare and contrast the models for each case (resident archetype) across specialties/institutions. As a team, we visually compared (1) how each program modified the base model to reflect its system of programmatic assessment more accurately and (2) how all four specialty programs represented the influence of each resident archetype on their systems of programmatic assessment. We discussed which similarities could be collapsed across the models and which differences were important and should be noted as meaningful discrepancies, while drawing upon key quotes from the audio transcripts to explain how resident archetype can influence the functioning of programs of assessment.

3. Results

Research participants (n = 17) included competence committee members and academic advisors from four different postgraduate specialty programs across Canada. Table 1 reports the number of participants from each specialty program. Our sample included diversity in medical specialty and program size. We have intentionally omitted the names of the programs’ institutions to protect the confidentiality of the participants.
First, we describe how each program modified the base model to reflect their system of programmatic assessment more accurately. Next, we summarize and discuss how the four specialty programs represent the influence of each resident archetype on the system of programmatic assessment.

3.1. Improvements Made to the ‘Base Model’ of Programmatic Assessment

The model of programmatic assessment involving two co-dependent cycles (one of knowledge (assessment data/information) production and one of knowledge use), initially presented as a base/starting model for discussion [6], resonated with participants from all four programs. Three consistent additions were suggested as improvements to the model, including (1) acknowledgement of additional ‘other assessments’ outside of workplace-based assessments, such as Objective Structured Clinical Exams and written specialty examinations; (2) decision making by the Residency Programs Committee (RPC); and (3) direct communications between faculty and member(s) of the Competence Committee, which circumvent the resident and their e-portfolio (highlighted components within Figure 2). These findings suggest that this newly revised model has some face validity: on the surface, it appears to accurately model how programmatic assessment works in practice.

3.2. Support for the Four Resident Cases (‘Archetypes’)

The four resident cases presented and discussed as archetypes (sample profiles) resonated with participants across all four specialty programs. Even though engagement and performance strength are two continua, participants mentioned, and even agreed upon, specific residents (past or current) who ‘came to mind’ as typical examples of each of the four cases. These findings would also suggest that the proposed resident cases (‘archetypes’) have some face validity.

3.3. Do Resident Archetypes Influence the Functioning of Programs of Assessment?

Participants agreed and were able to easily explain how differences in resident engagement and performance influence the co-dependent cycles of knowledge production and use through interactions between program stakeholders and program elements. First, we will independently describe and then model (Figure 3) the influence of each resident archetype using recurrent ideas and participant quotes from the four programs/focus groups. Next, we will provide a summary comparing and contrasting the influence of each resident archetype on key program elements/relationships (Table 2). To help readers understand the relationship between the notation used in the models (Figure 3) and summary (Table 2), we have described the connections in parentheses for Case 1 (as an example to support interpretation).

3.4. Case (Archetype) 1: Engaged and Strong Performance

Participants agreed that these residents tend to generate ‘lots of assessment data’ through workplace-based assessments with frontline faculty and ‘engage more’—especially with their Academic Advisor (larger arrows, representing more of these interactions and resultant information going into the resident’s ePortfolio). Given the wealth of information documented over time about their performance, review of these residents’ ePortfolios is considered ‘minor’, meaning that not much time is spent interpreting the information to decide their progress unless it is suggested that they be promoted to an ‘accelerated path’ or ‘advanced learning plan’. If this happens, more time is spent discussing and synthesizing evidence to document the expedited learning trajectory and develop the advanced learning plan for approval by the RPC (more or less time is represented as up/down arrows). Otherwise, if these residents are ‘progressing as expected’, very little information is communicated through their Learning Plan (i.e., ‘good job’, ‘keep going’, etc.) (represented as a smaller arrow).

3.5. Case (Archetype) 2: Engaged and Weak Performance

Participants agreed that residents who are engaged but weakly performing also tend to generate ‘lots of assessment data’ and ‘engage more’ with their Academic Advisor. However, much more time is spent by the Competence Committee generating a reliable and transparent summative decision about their performance. There are thought to be several possible reasons for this, including the large quantity of available workplace-based assessment data; variability in scored performance and feedback across assessments; vaguely completed assessments (requiring ‘reading between the lines’ [24]); and/or additional information brought forth to Competence Committee members through ‘backend hallway conversations’ [21]. Consequently, participants agreed that they will sometimes need to ‘go back and target faculty to specifically provide context to make sense of the resident’s data’ or ‘get additional information’ to fill in the blanks and make sense of these residents’ performance. Given the ‘close attention’ devoted to reviewing and discussing these residents’ performance, Competence Committees often have ‘lots of formalized feedback’ to share in their learning plans.

3.6. Case (Archetype) 3: Disengaged and Strong Performance

Participants agreed that residents who are disengaged but strongly performing tend to generate ‘less, but more consistent’ workplace-based assessment data. While Academic Advisors may have to rely more on other assessments to inform their summary of these residents’ performance, discussion and summative decision making by the Competence Committee does not take more time. Often, the summative feedback shared with the resident is simply to ‘get more assessments.’ This feedback tends to temporarily motivate these residents to re-engage in workplace-based assessments; however, even though their performance is strong, their motivation tends to drop off again over time, resulting in ongoing issues of limited performance information.

3.7. Case (Archetype) 4: Disengaged and Weak Performance

Participants agreed that residents who are disengaged and weakly performing tend to generate ‘less and more variable’ workplace-based assessment data. Consequently, Academic Advisors and Competence Committee members tend to spend the most time discussing the performance of these ‘few’ residents, with more time spent ‘making sense of what little assessment data is available’. More attention is also given to results from other assessments, and Competence Committee members are compelled to consult targeted faculty members to provide more anecdotal information about these residents’ performance. Given the amount and gravity of the formalized feedback to be shared with these residents, the Competence Committee engages the Residency Program Committee in a more in-depth discussion to approve escalating their learning plans to ‘modified’ status. However, despite receiving a modified learning plan that includes ‘big recommendations for improvement’, the participants agreed that these residents’ disengagement in workplace assessments tends to persist.
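As a compact, illustrative restatement of the pattern reported in Cases 1–4 (and summarized later in Table 2), the mapping below encodes the direction of influence described by participants for a few key program activities. The dictionary keys and symbols ('+' more of, '-' less of, '0' baseline, '-/+' depending on whether an accelerated path is pursued) are our own shorthand, not a study instrument.

# Illustrative shorthand (assumed structure) for the reported influence of each archetype.
influence = {
    "engaged, strong":    {"workplace_assessment_data": "+", "advisor_interaction": "+",
                           "committee_review_time": "-/+", "learning_plan_feedback": "-"},
    "engaged, weak":      {"workplace_assessment_data": "+", "advisor_interaction": "+",
                           "committee_review_time": "+",   "learning_plan_feedback": "+"},
    "disengaged, strong": {"workplace_assessment_data": "-", "advisor_interaction": "-",
                           "committee_review_time": "0",   "learning_plan_feedback": "-"},
    "disengaged, weak":   {"workplace_assessment_data": "-", "advisor_interaction": "-",
                           "committee_review_time": "+",   "learning_plan_feedback": "+"},
}

# Example query: which archetypes demand more committee review time?
print([a for a, v in influence.items() if v["committee_review_time"] == "+"])
# -> ['engaged, weak', 'disengaged, weak']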

4. Discussion

The findings of this study offer important and novel insights into the modelling and implementation of programmatic assessment across specialist residency training programs, institutions, and resident archetypes (sample profiles across two continua: engagement and performance strength). First, we found evidence to suggest that our previously developed model of programmatic assessment (Figure 1), which is based on the operationalization of programmatic assessment within one Emergency Medicine program at Queen’s University [6], is reflective of the models being operationalized within four different postgraduate programs and four separate institutions. Second, small but important improvements were suggested to refine some of the details of this model (Figure 2). Together, these findings suggest that this working model has some face validity for programmatic assessment stakeholders. Third, we have found that our working model is sensitive to differences in resident engagement and strength of performance (Figure 3). Each of the four resident archetypes influenced components of the model in different ways and yielded some important ‘lessons learned’ for other programs looking to implement, improve upon, or evaluate their program of assessment.
First, we have learned that residents who are engaged in formative assessment and perform strongly likely do not receive as much return on their investment in this model of programmatic assessment as implemented. This finding refutes a key promise of CBME: that all residents, and not just those who are struggling, will benefit from being engaged in a system of PA [5]. We found evidence to suggest that these ‘high-functioning’ residents are likely being ‘short-changed’ in terms of the formalized summative feedback they receive from their Competence Committees. A similar concern was raised by the Fédération des médecins résidents du Québec (FMRQ) in its survey of residents’ experiences with CBME in Canada [25]: not all residents are seeing an increase in the quality of formalized summative feedback received via individualized learning plans generated by their Competence Committee. High-functioning residents may therefore become increasingly sceptical of their time investment in programmatic assessment if they continue to receive low pedagogical return. That said, if CBME programs are being implemented as intended, these residents should be getting more criterion-referenced feedback than they had before on workplace-based assessments [26]. However, it is still unclear if feedback to residents is actually improving [27,28].
Second, we have learned that despite these programs’ designs and developmental intentions for implementation, weaker performing residents may promote a problem identification paradigm [29], whereby Academic Advisors and Competence Committee members are faced with making sense of ‘problematic evidence’ [15] and reach outside of the formal system of programmatic assessment in place to solicit additional information from faculty. Consequently, this may result in the acquisition and use of ‘less defensible’ data to inform higher-stakes summative decision-making. In our study, participants explained how ‘other systems get activated’ when a resident is ‘flagged’ or thought to be ‘in trouble’. While some of these sub-systems are positive, such as engagement of the Residency Program Committee, other sub-systems, such as informal hallway conversations, emails to the Program Director or individual members of the Competence Committee, or Academic Advisors and/or Competence Committee members drawing upon their memories of first-hand anecdotal experiences working with the residents in question, could be maladaptive in that they have the potential to magnify possible biases and inequities [30].
Together, these two lessons learned motivate us to fulfill the potential of PA in CBME by more intentionally and strategically challenging those who are engaged and strongly performing, and by anticipating ways that weakly performing residents may challenge or strain existing programmatic assessment components and processes. While our study sheds some light on the ways that different resident archetypes can potentially influence the functioning of programs of assessment, we must also consider its limitations. Participants represented a sample of Academic Advisors and/or Competence Committee members from four different specialties and institutions, potentially limiting the generalizability of our findings. However, our sample does include representation from diverse specialties and program sizes across Canadian provinces. In addition, our semi-structured approach to asking focus group questions, while concurrently adapting the base model of programmatic assessment to reflect participants’ experiences, may have steered participants to share their perceived rather than their actual experiences. Further, the software utilized (Microsoft PowerPoint) limited the graphical representation. We did not observe any Competence Committee meetings to confirm or refute participants’ self-reported experiences or approaches. Rather, we relied on group discussion and consensus to ratify changes made to components of the model of programmatic assessment to reflect the influence of each resident archetype (presented in Figure 3). Finally, we did not focus in detail on the nuances and differences in the nature of the interactions between frontline faculty assessors and different resident archetypes beyond the generation of assessment data. While our findings suggest that engaged and strongly performing residents receive limited summative returns, improved frontline interactions not reflected in this model, such as more moments of formative assessment and/or more targeted assessments and feedback, may be returns not captured here.

5. Conclusions

The continued interest and investment in competency-based approaches to medical education challenge us to better understand how we design, implement, and evaluate programs of assessment to meet the needs of residents, educators, program leaders, and patients. Our participants, representing four residency programs of different sizes and medical specialties, described how differences in resident engagement and performance strength can influence the functioning of their programs of assessment in some (mal)adaptive ways. For the few residents who are disengaged and weakly performing, significantly more time is thought to be spent by Academic Advisors and Competence Committee members making sense of problematic evidence, arriving at a decision, and generating recommendations for improvement. In contrast, for the vast majority of residents who are engaged and perform strongly, significantly less time and energy are spent on discussion and formalized recommendations to challenge their ongoing growth and development. While some trade-offs exist in any program with limited resources, it does seem unfair that those residents who are investing more effort receive less return on their investment. Thus, we are challenged as a medical education community to consider ways that we can make programmatic assessment more equitable in terms of the realized benefits for all learners.

Supplementary Materials

The following supporting information can be downloaded at https://www.mdpi.com/article/10.3390/educsci12050293/s1. Data Collection Protocol: Focus group script and questions.

Author Contributions

Conceptualization, J.V.R., L.C. and A.K.H.; methodology, J.V.R. and A.K.H.; investigation, J.V.R. and A.K.H.; formal analysis, J.V.R., W.J.C., L.C., A.O., S.G. and A.K.H.; writing—original draft preparation, J.V.R. and A.K.H.; writing—review and editing, J.V.R., W.J.C., L.C., A.O., S.G. and A.K.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans (TCPS 2) and was approved by The Queen’s University Health Sciences & Affiliated Teaching Hospitals Research Ethics Board (HSREB) (file no. 6032762, 29 June 2021).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Data available on request due to privacy restrictions.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ross, S.; Hauer, K.E.; Wycliffe-Jones, K.; Hall, A.K.; Molgaard, L.; Richardson, D.; Oswald, A.; Bhanji, F.; Collaborators, I. Key considerations in planning and designing programmatic assessment in competency-based medical education. Med. Teach. 2021, 43, 758–764.
  2. van der Vleuten, C.P.; Schuwirth, L.W.; Driessen, E.W.; Dijkstra, J.; Tigelaar, D.; Baartman, L.K.; van Tartwijk, J. A model for programmatic assessment fit for purpose. Med. Teach. 2012, 34, 205–214.
  3. Schuwirth, L.W.; Van der Vleuten, C.P. Programmatic assessment: From assessment of learning to assessment for learning. Med. Teach. 2011, 33, 478–485.
  4. van der Vleuten, C.P.; Schuwirth, L.W. Assessing professional competence: From methods to programmes. Med. Educ. 2005, 39, 309–317.
  5. Harris, P.; Bhanji, F.; Topps, M.; Ross, S.; Lieberman, S.; Frank, J.R.; Snell, L.; Sherbino, J.; Collaborators, I. Evolving concepts of assessment in a competency-based world. Med. Teach. 2017, 39, 603–608.
  6. Rich, J.V.; Fostaty Young, S.; Donnelly, C.; Hall, A.K.; Dagnone, J.D.; Weersink, K.; Caudle, J.; Van Melle, E.; Klinger, D.A. Competency-based education calls for programmatic assessment: But what does this look like in practice? J. Eval. Clin. Pract. 2020, 26, 1087–1095.
  7. Bok, H.G.; Teunissen, P.W.; Favier, R.P.; Rietbroek, N.J.; Theyse, L.F.; Brommer, H.; Haarhuis, J.C.; van Beukelen, P.; van der Vleuten, C.P.; Jaarsma, D.A. Programmatic assessment of competency-based workplace learning: When theory meets practice. BMC Med. Educ. 2013, 13, 123.
  8. Van Der Vleuten, C.P.M.; Schuwirth, L.W.T.; Driessen, E.W.; Govaerts, M.J.B.; Heeneman, S. Twelve Tips for programmatic assessment. Med. Teach. 2015, 37, 641–646.
  9. Rich, J.V.; Luhanga, U.; Fostaty Young, S.; Wagner, N.; Dagnone, J.D.; Chamberlain, S.; McEwen, L.A. Operationalizing Programmatic Assessment: The CBME Programmatic Assessment Practice Guidelines. Acad. Med. 2021.
  10. Driessen, E.W.; van Tartwijk, J.; Govaerts, M.; Teunissen, P.; van der Vleuten, C.P. The use of programmatic assessment in the clinical workplace: A Maastricht case report. Med. Teach. 2012, 34, 226–231.
  11. Heeneman, S.; Oudkerk Pool, A.; Schuwirth, L.W.; van der Vleuten, C.P.; Driessen, E.W. The impact of programmatic assessment on student learning: Theory versus practice. Med. Educ. 2015, 49, 487–498.
  12. Li, S.A.; Sherbino, J.; Chan, T.M. McMaster Modular Assessment Program (McMAP) Through the Years: Residents’ Experience With an Evolving Feedback Culture Over a 3-year Period. AEM Educ. Train. 2017, 1, 5–14.
  13. Sargeant, J.; Lockyer, J.M.; Mann, K.; Armson, H.; Warren, A.; Zetkulic, M.; Soklaridis, S.; Konings, K.D.; Ross, K.; Silver, I.; et al. The R2C2 Model in Residency Education: How Does It Foster Coaching and Promote Feedback Use? Acad. Med. 2018, 93, 1055–1063.
  14. Sklansky, D.J.; Frohna, J.G.; Schumacher, D.J. Learner-Driven Synthesis of Assessment Data: Engaging and Motivating Residents in Their Milestone-Based Assessments. Med. Sci. Educ. 2017, 27, 417–421.
  15. Pack, R.; Lingard, L.; Watling, C.J.; Chahine, S.; Cristancho, S.M. Some assembly required: Tracing the interpretative work of Clinical Competency Committees. Med. Educ. 2019, 53, 723–734.
  16. Caplan, N. The Two-Communities Theory and Knowledge Utilization. Am. Behav. Sci. 2016, 22, 459–470.
  17. Yazan, B. Three Approaches to Case Study Methods in Education: Yin, Merriam, and Stake. Qual. Rep. 2015, 20, 134–152.
  18. Merriam, S.B.; Tisdell, E.J. Qualitative Research: A Guide to Design and Implementation; John Wiley & Sons: San Francisco, CA, USA, 2015.
  19. Royal College of Physicians and Surgeons of Canada. What Is Competency-By-Design? n.d. Available online: https://www.royalcollege.ca/rcsite/cbd/what-is-cbd-e (accessed on 11 March 2022).
  20. Cate, O.T. Nuts and bolts of entrustable professional activities. J. Grad. Med. Educ. 2013, 5, 157–158.
  21. Cheung, W.J.; Wagner, N.; Frank, J.R.; Oswald, A.; Van Melle, E.; Skutovich, A.; Dalseg, T.R.; Cooke, L.J.; Hall, A.K. Implementation of competence committees during the transition to CBME in Canada: A national fidelity-focused evaluation. Med. Teach. 2022, 1–9.
  22. Richardson, D.; Kinnear, B.; Hauer, K.E.; Turner, T.L.; Warm, E.J.; Hall, A.K.; Ross, S.; Thoma, B.; Van Melle, E.; Collaborators, I. Growth mindset in competency-based medical education. Med. Teach. 2021, 43, 751–757.
  23. Malterud, K.; Siersma, V.D.; Guassora, A.D. Sample Size in Qualitative Interview Studies: Guided by Information Power. Qual. Health Res. 2016, 26, 1753–1760.
  24. Ginsburg, S.; Regehr, G.; Lingard, L.; Eva, K.W. Reading between the lines: Faculty interpretations of narrative evaluation comments. Med. Educ. 2015, 49, 296–306.
  25. Fédération des Médecins Résidents du Québec. Year 3 of Implementation of Competence by Design: Negative Impact Still Outweighs Theoretical Benefits; Fédération des Médecins Résidents du Québec: Montréal, QC, Canada, 2020.
  26. Hall, A.K.; Rich, J.; Dagnone, J.D.; Weersink, K.; Caudle, J.; Sherbino, J.; Frank, J.R.; Bandiera, G.; Van Melle, E. It’s a Marathon, Not a Sprint: Rapid Evaluation of Competency-Based Medical Education Program Implementation. Acad. Med. 2020, 95, 786–793.
  27. Marcotte, L.; Egan, R.; Soleas, E.; Dalgarno, N.; Norris, M.; Smith, C. Assessing the quality of feedback to general internal medicine residents in a competency-based environment. Can. Med. Educ. J. 2019, 10, e32–e47.
  28. Tomiak, A.; Braund, H.; Egan, R.; Dalgarno, N.; Emack, J.; Reid, M.A.; Hammad, N. Exploring How the New Entrustable Professional Activity Assessment Tools Affect the Quality of Feedback Given to Medical Oncology Residents. J. Cancer Educ. 2020, 35, 165–177.
  29. Hauer, K.E.; Chesluk, B.; Iobst, W.; Holmboe, E.; Baron, R.B.; Boscardin, C.K.; Cate, O.T.; O’Sullivan, P.S. Reviewing residents’ competence: A qualitative study of the role of clinical competency committees in performance assessment. Acad. Med. 2015, 90, 1084–1092.
  30. Tweed, M.J.; Thompson-Fawcett, M.; Wilkinson, T.J. Decision-making bias in assessment: The effect of aggregating objective information and anecdote. Med. Teach. 2013, 35, 832–837.
Figure 1. A model of programmatic assessment (reproduced with permission) illustrating co-dependent cycles of knowledge (information) production (red) and use (blue). Double-headed arrows depict interactions between program stakeholders (e.g., performance assessments, Academic Advisor meetings, Competence Committee meetings) and directional arrows depict the flow of knowledge (information) through the co-dependent cycles of production and use. Even though the Program Director (PD) and CBME Lead are members of the Competence Committee in this model, they also oversee the residency program, including the system of programmatic assessment. AA—Academic Advisor; EP—ePortfolio; PLP—Personal Learning Plan.
Figure 2. Revised model of programmatic assessment [6] with improvements (highlighted), which include the addition of ‘other assessments’; communication between the Competence Committee and the Residency Programs Committee (RPC); and direct communications between faculty assessors and member(s) of the Competence Committee, which circumvent the resident and their e-portfolio (new double-headed arrows).
Figure 3. Models depicting the influence of each resident archetype on key program elements and activities. Note that the size of arrows has been modified from the revised base model to represent more or less of key program elements and activities (also summarized below in Table 2). For example, for the engaged and weak performing resident, there is more anecdotal information seeking between members of the Competence Committee and frontline faculty assessors, thereby bypassing the resident and their ePortfolio.
Table 1. Summary of study participants.
Specialty Program | Program Size (Number of Residents) | Number of Participants | Size of Competence Committee 1
Rheumatology | Small (4–5) | n = 6 | N = 6
Neurology | Medium (20) | n = 3 | N = 10
Emergency Medicine | Large (50) | n = 6 | N = 7
Internal Medicine | Large (68) | n = 2 | N = 10
4 different specialty programs | Diversity in program size | Total n = 17 | Total N = 33
1 Size of Competence Committee was inclusive of Academic or Assessment Advisors.
Table 2. Summary of the influence of each resident archetype on key program elements and activities.
Key Program Elements and Activities (in Parentheses) | Engaged and Strong Performance | Engaged and Weak Performance | Disengaged and Strong Performance | Disengaged and Weak Performance
Faculty and resident engagement in workplace-based assessment (formative assessment) | ↑ | ↑ | ↓ | ↓
Faculty and the competence committee bypassing resident and ePortfolio (anecdotal information seeking) | − | ↑ | − | ↑
Other assessments of resident included in ePortfolio (additional use of) | − | − | ↑ | ↑
Interaction between resident and advisor/coach during progress meetings (co-regulation of learning) | ↑ | ↑ | ↓ | ↓
Academic advisor reviewing/discussing with the competence committee presentation of evidence | − | ↑ | − | ↑
Competence committee review of the ePortfolio/dashboard (summative assessment/evaluation) | ↓/↑ | ↑ | − | ↑
Competence committee discussion with the residency program committee (review and approval) | ↑ | ↑ | − | ↑
Competence committee documenting within the personal learning plan (feedback from summative assessment) | ↓ | ↑ | ↓ | ↑
Resident engagement with their personal learning plan (self-regulated learning) | ↓/↑ | ↑ | ↓ | ↓
Legend: ↑ more of; ↓ less of; − baseline.
