Article

Taking the Big Leap: A Case Study on Implementing Programmatic Assessment in an Undergraduate Medical Program

1 Medical Education Unit, Section of Medicine, University of Fribourg, 1700 Fribourg, Switzerland
2 Med-Office, Section of Medicine, University of Fribourg, 1700 Fribourg, Switzerland
3 Cantonal Hospital of Fribourg, 1752 Villars-sur-Glâne, Switzerland
* Author to whom correspondence should be addressed.
Educ. Sci. 2022, 12(7), 425; https://doi.org/10.3390/educsci12070425
Submission received: 23 May 2022 / Revised: 17 June 2022 / Accepted: 20 June 2022 / Published: 22 June 2022
(This article belongs to the Special Issue Programmatic Assessment in Education for Health Professions)

Abstract

The concept of programmatic assessment (PA) is well described in the literature; however, studies on implementing and operationalizing this systemic assessment approach are lacking. The present case study developed a local instantiation of PA, referred to as Assessment System Fribourg (ASF), which was inspired by an existing program. ASF was used for a new competency-based undergraduate Master of Medicine program at the University of Fribourg, a Swiss state university. ASF relies on the interplay of four key principles and nine main program elements based on concepts of PA, formative assessment, and evaluative judgment. We started our journey in 2019 with the first cohort of 40 students, who graduated in 2022. This paper describes our journey implementing ASF, including the enabling factors and hindrances that we encountered, and reflects on our experience and the path still in front of us. This case illustrates one possibility for implementing PA.

1. Introduction

Although the field of programmatic assessment (PA) has a rich theoretical and conceptual literature, there are few studies on how to implement it. This case study aimed to design and implement PA for a new undergraduate medical program at the University of Fribourg, a state university in Switzerland. This paper describes our context and the drivers that led us to choose PA, our instantiation of PA, referred to as Assessment System Fribourg (ASF), and reflections on our journey.

1.1. Context of Medical Education

1.1.1. Switzerland

Undergraduate medical training in Switzerland is divided into a 3-year bachelor's program and a 3-year master's program, both under the authority of the universities [1]. Most students begin their medical education directly after high school. The Federal Law on Medical Professions (MedPA) regulates Swiss undergraduate and postgraduate medical education. This law establishes three instruments to regulate undergraduate education: (1) the federal licensing examination (FLE), which delivers the federal physician diploma required to enter postgraduate training; (2) an outcomes-based national framework of reference called PROFILES [2,3], which describes what is expected of a resident entering postgraduate training and determines the content of the FLE; and (3) a mandatory accreditation process (see Figure 1).

1.1.2. Fribourg

The shortage of trained physicians in Switzerland has prompted the federal government to encourage universities to increase the number of medical graduates. The Canton of Fribourg, one of the regions with the lowest physician density in Switzerland [4], seized the opportunity to expand its offering of basic medical training. At the end of 2016, the State Council of the Canton of Fribourg mandated the University and the hospital to create a master's degree program with the objectives of "promoting the professional choice of family medicine" and "meeting the needs of the population," as well as the long-term goal of increasing the number of established family physicians in the Canton.

1.1.3. University of Fribourg

The University of Fribourg offered a solid bilingual (French and German) Bachelor of Medicine with partially integrated modules and a traditional assessment approach: written multiple-choice question (MCQ) exams and short essays, practical and oral exams, and an objective structured clinical examination (OSCE). Each year, the 80 bachelor graduates had to pursue their master's studies at one of the five Swiss medical faculties. In 2016, the University worked with partner teaching hospitals to design a bilingual Master of Medicine program (MMed) and conducted a feasibility study that estimated an enrollment capacity of 40 students and the additional resources needed to run the program. The first professors were hired in late 2017, and the detailed program was developed jointly with our ASF, which implements the principles of PA. The first student cohort started the program in 2019.

2. Why Programmatic Assessment?

“Have you ever been a part of an idea whose time has come?”
Harold Lyon, guest Fulbright Professor
The overarching goal of ASF is to support and promote the development of students’ competencies and learning autonomy to best prepare them for their entry into postgraduate education. A constellation of drivers led us to consider and adopt PA as the foundation of ASF.

2.1. Drivers

2.1.1. Competency-Based Education

The main driver for ASF was certainly PROFILES [3], the third version of the Swiss Reference Framework for Basic Education introduced in 2017 [5] and, consequently, the main reference for our new MMed. PROFILES is consistently outcome- and competency-oriented, so we decided to build the program around this educational paradigm and chose PA to align the assessment approach [6].

2.1.2. Family Medicine

A major driver was our new master's program's focus on family medicine. We felt that PA and the concept of evaluative judgment (the ability to make decisions about the quality of one's own work and that of others) [7] could give students more ownership and promote their autonomy and self-efficacy, qualities known to increase students' confidence in practicing family medicine [8], a specialty in which physicians are more autonomous and face more uncertainty than in most others.

2.1.3. Cohort Size

We adapted the design of our program and assessment system to the limitations and opportunities of having cohorts of 40 students. Creating numerous, lengthy high-stake MCQ exams that encompass each teaching unit and meet psychometric standards for only 40 students would have demanded a huge effort; we offloaded this to a great extent onto an externally acquired progress test. We kept specific formative MCQs and directed the remaining teaching resources toward personal student guidance through a learning advisor and individual assessment through a student progress committee.

2.1.4. Differentiating Factor

With ASF, we have been able to add an innovative element that underscores the disruptive capacity of smaller structures [9] and helps us differentiate our program from what our long-established and larger neighboring medical schools offer.

2.2. Enabling Factors and Hindrances

Some of the driving forces, such as PROFILES and its competency-based medical education (CBME) approach, were strong enablers for the implementation of ASF. Cohort size is another factor: it facilitates proximity among students, faculty, and the school, and avoids the anonymity that might foster disconnection among stakeholders. The fact that we began the program with almost no prior history cut both ways. The lack of a prior program allowed us to select available evidence and best practices and to avoid the many well-known problems with assessment that occur in traditional programs [10]. However, many faculty members were inexperienced in managing and implementing a program and did not understand CBME; they were unaware of the issues that CBME and ASF address and tended to consider the new learning-promoting educational activities related to ASF unnecessary and as competing for "real" lecturing time.
Leaving the well-known realm of summative assessment for a formative approach was uncharted territory for almost all parties involved; it carried significant uncertainties and fears that could trigger fierce resistance, even at a later stage. Clear and consistent principles, open and transparent communication, and strong backing from the institution were essential in those critical moments. In addition, we found it very helpful to inform students, teachers, and support staff that unclear situations may occur and to strive for a partnership approach to finding solutions without compromising core principles.
The requirement of the European Bologna system that all modules be validated separately to receive ECTS (European Credit Transfer System) credits was a potentially major obstacle to our continuous assessment system. Fortunately, our university regulations allowed us to allocate ECTS credits to different parts of the program and award all credits at once at the end of the year. The regulations also allowed the program to operate without grades.
The new, active teaching role of assessment also changed how the program's administrative support needed to be organized. It took us some time to adjust to a higher degree of collaboration between the program coordination and the assessment office and to shift some responsibilities.
Finally, the portfolio represents a central element of ASF. For students, it is the central tool that manages all their pieces of evidence; it represents an essential link to their learning advisors, and clinical teachers use it to enter their evaluations and feedback. This complexity and visibility raised technical issues (usability, accessibility, aggregation of data, data protection) and organizational challenges (workflows, processes) that took more time, resources, and reflection than anticipated.

3. Description of Assessment System Fribourg (ASF)

“The goal of education, if we are to survive, is the facilitation of change and learning.”
Carl Rogers
ASF was inspired by the Cleveland Clinic Lerner College of Medicine (CCLCM) program [11] and developed jointly with the master's curriculum from spring 2018 to fall 2019, when the first cohort began the MMed. The architecture is based on a formative, year-long assessment program culminating in promotion decisions at the end of the year. ASF presents four key concepts that interplay with nine program elements, allowing the whole system to be greater than the sum of its parts (see Figure 2).

3.1. Four Key Concepts

3.1.1. Formative Approach

We decided that all assessments in the MMed should be formative. Each assessment activity is considered a data point; it should provide useful information that allows students to progress in developing their skills and learning. Considering a whole assessment program to be formative (including the summative end-of-year promotion decision) implies going beyond the notion of "assessment that does not count"; it calls for a deeper understanding of the approach, which leads to a cascade of considerations. We used two guideposts when reflecting on formative assessment. First, Black and Wiliam defined five strategies for formative assessment based on three questions ("Where is the learner going?" "Where are they currently?" "How will they get there?") that can be answered by three stakeholders: teachers, peers, and learners [12]. Second, pre-, pure-, and post-assessment effects [13,14,15] each have a different formative thrust that can be used appropriately.
To support the formative nature of the program, we decided to avoid grades altogether, as they undermine the formative dynamic. Grades are probably one of the worst forms of feedback, carrying a normative message (competition, learning for the assessment) and a low degree of specificity [16]. Interest in more detailed feedback often wanes once students receive their grades [17], and moving to a no-grades approach changes their behavior [18]. That said, teachers and students alike rely very much on grades for orientation and have difficulty imagining working in their absence; the suppression of grades is often understood as the suppression of evaluation. Even students who desperately need feedback may not use it [19]; this is also true for teachers, who may not always see the need to invest energy and time when grades, especially high-stake grades, are not involved.

3.1.2. The Feedback Culture

Merely giving our students correct feedback information is insufficient. We adopted the definition of feedback provided by Boud et al.: "Feedback is a process whereby learners obtain information about their work to appreciate the similarities and differences between the appropriate standards for any given work, and the qualities of the work itself, to generate improved work" [20]. This definition implies a shift of attention from giving helpful information to how students productively use feedback information in their learning process [21]. Students gain ownership of their training by driving their learning and by soliciting and generating their own feedback. Being in charge of their training in the absence of grades pushes students to reduce their dependence on external judgment when evaluating the quality of their own work and learning (and that of others), in what Boud et al. call "evaluative judgment" [7]. From this perspective, the responsibility of the program and teachers goes beyond providing feedback information; it is to ensure a sustainable feedback approach by planning appropriate feedback activities and by ensuring the quality (informative, constructive, and timely) of feedback information [22,23].
This reliance on student autonomy in the feedback process goes hand in hand with a duty to train students in how to give, receive, and use feedback information effectively. The school must also provide timely, personalized support [24] so students learn to manage these challenging tasks.

3.1.3. Reflective Practice

Competence can be considered more a habit of lifelong learning than an achievement [25], and this habit of mind includes critical curiosity, self-awareness, presence, and reflection [26,27]. Thus, we considered reflection an essential aspect underpinning all seven CanMEDS roles [28]. We based our clinical curriculum (18 months of clinical immersion: 30 weeks of clinical rotations and 11 months of clerkship) on an epistemology of practice [29] in which we expect our students to learn from experience through "reflection on action" and "reflection in action" [30].

3.1.4. Assessment as a Continuum of Stakes

Since all assessment is formative, the distinction between formative and summative assessment is not meaningful; it can even be confusing (our end-of-year "summative" assessment is also formative) and goes against the grain of the formative approach. Following the principles of PA, we instead chose to differentiate appraisal activities according to their use, namely, the impact of their results on the promotion decision at the end of the year. We used a three-tiered scale: low, medium, and high stakes. Individual low-stake data points have no direct impact on promotion. Medium-stake assessments represent an aggregation of many low-stake data points or comprehensive reviews; they influence promotion decisions but do not decide them on their own. Promotion decisions represent the only high-stake assessment in ASF. They are based on what might be called a "thick description" of student performance in the various competency domains, resulting from the aggregation of, and reflection on, numerous low- and medium-stake data points from multiple sources. Practically, this means that all assessment activities count: there are no no-stake exams, as each represents a data point. A medium-stake assessment, such as an end-of-rotation evaluation, may be judged insufficient, but this does not amount to a fail. Pass/fail decisions are taken only in the high-stake assessment, which occurs during the end-of-year evaluation.
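To make this continuum concrete, the sketch below models the three stake tiers and the flow from low-stake data points, through medium-stake aggregations, to the single high-stake promotion dossier. It is a minimal illustration of the concept only; ASF's actual tooling is not described in this paper, and all names and fields here are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum
from typing import List

class Stake(Enum):
    LOW = "low"        # single data point; no direct impact on promotion
    MEDIUM = "medium"  # aggregation of low-stake points; informs, never decides alone
    HIGH = "high"      # end-of-year promotion decision only

@dataclass
class DataPoint:
    source: str              # e.g., workplace-based assessment, peer feedback
    competency_domain: str   # one of CD 1-8
    narrative: str           # feedback text; no grade is ever attached

@dataclass
class MediumStakeReview:
    label: str                 # e.g., "end-of-rotation evaluation"
    evidence: List[DataPoint]  # the aggregated low-stake data points
    sufficient: bool           # "insufficient" here is not a fail

def promotion_dossier(reviews: List[MediumStakeReview],
                      loose_points: List[DataPoint]) -> dict:
    """Assemble the 'thick description' on which the only high-stake
    decision (promotion) rests; no single data point decides it."""
    return {
        "stake": Stake.HIGH,
        "evidence_count": len(loose_points)
                          + sum(len(r.evidence) for r in reviews),
        "insufficient_reviews": [r.label for r in reviews if not r.sufficient],
    }
```

The design point the sketch emphasizes is that a "fail" state simply does not exist below the high-stake tier: an insufficient medium-stake review is recorded and carried forward as information, not as a verdict.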

3.2. Nine Program Elements

3.2.1. Competency Domains

With PROFILES [3], we had a national competency-based framework that should have made it easy to shape our local frame of reference; this proved to be a challenge that we are still working on. The seven CanMEDS roles and the nine entrustable professional activities (EPAs) defined in PROFILES were, in our view, too many interrelated dimensions for students to track. Using either the roles or the EPAs alone would have led to blind spots: keeping only the competencies of the roles would not adequately cover the specifics of the EPAs, while the generic nature of the EPAs made them inappropriate for guiding all of the required learning. We also had to consider the goals that the State Council had set for the MMed (family medicine and population needs). Finally, we grouped all dimensions into eight competency domains (CDs), which became the reference for our CBME program and ASF (see Figure 3).
For each CD, we have defined milestones that describe the expected level at the end of each academic year. The milestones serve as a reference for promotion decisions at the end of the year. Using and working with the CDs allows us to gradually improve them and resolve overlaps between different areas (Family Medicine, Physicianship, progress within EPAs).

3.2.2. Multiple Assessment Methods

Information that helps students develop their competencies is the main criterion in choosing an assessment format [32]. The formative approach provides latitude to plan a variety of assessment formats well beyond the psychometric discourse, considering their informative role and use [33]. In addition to assessments linked to curriculum activities, we administer a progress test three times per year to track progress toward the expected final level. Students are also invited to add personal reflections, specific feedback, and the assessment of their optional excellence project (personal extracurricular training, such as an ultrasound certificate or anthroposophical medicine).
Overall, these assessment activities provide data points that paint a “thick description” of students’ learning experiences (see Figure 4). Students are expected to use and reflect on these data points in their end-of-year reflective essay for promotion decisions.

3.2.3. Mapping of the Assessments

To cope with the complexity and diversity of the assessment landscape, it became important to track how well the different CDs are represented in our program. We mapped the different assessment activities to the CDs (see Table 1) to help us better manage, and communicate about, all assessment activities [34]; a toy illustration of how such a map can be queried follows below.
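The snippet below encodes a few activities with CD weights mirroring the legend of Table 1 and flags under-represented domains. The activities and weights are invented for illustration and do not reproduce Table 1; only the idea of using the map to detect blind spots is taken from the text.

```python
# Hypothetical assessment-to-CD map in the spirit of Table 1.
# Weights follow the table's legend: 1 = some, 2 = good, 3 = rich information.
ASSESSMENT_MAP = {
    "end-of-rotation evaluation": {"CD1": 2, "CD3": 3, "CD4": 3},
    "progress test": {"CD1": 3},
    "workplace-based assessment": {"CD1": 2, "CD3": 2, "CD4": 2, "CD8": 3},
}

def coverage(cd: str) -> int:
    """Total informative weight a competency domain receives across activities."""
    return sum(weights.get(cd, 0) for weights in ASSESSMENT_MAP.values())

# Flag potential blind spots: domains that no planned activity informs well.
for cd in (f"CD{i}" for i in range(1, 9)):
    if coverage(cd) < 2:
        print(f"{cd}: under-represented in the assessment program")
```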

3.2.4. Learning Advisor (Mentoring System)

Each student is assigned a personal learning advisor (LA) for the duration of the master's program. The LA supports the student's learning progress and judgment by discussing the student's data points, feedback, and experiences, encourages a reflective approach, and identifies potential learning difficulties. To this end, the LA has access to the student's electronic portfolio (ePortfolio) and meets with each of their students at least two to three times per year, for approximately one hour, to discuss the learning report. The LA is not involved in student promotion decisions, ensuring a genuinely formative and supportive relationship. The LA profile is an experienced clinician (advanced chief resident or attending) who teaches in the MMed. LAs must attend a one-day training session before starting to supervise their four assigned students. The LA coordinator organizes regular meetings with the LAs to discuss their experiences and to provide training on specific topics.

3.2.5. Learning Portfolio

We ask students to build and use their personal "comprehensive" portfolio [35]: they store and manage all their data points (assessment results, feedback, self-reflections) and other academic documents of interest (e.g., a point-of-care ultrasound certificate) in a central location (ePortfolio). The portfolio pursues three goals: (1) to support the student's learning process by recording all data points, (2) to facilitate reflection and the writing of learning reports by providing easy access to all collected data points, and (3) to serve as a dashboard of learning progress.
The portfolio is considered a personal space, and only the student and their LA have access to its contents. The portfolio as a whole is thus never assessed; the Student Progress Committee (SPC) sees only the selected evidence that the student adds to the learning report (described in the next section).

3.2.6. Learning Report

Students write seven reflective essays, named learning reports (LRs), during the MMed (three in Year 1, two in Year 2, two in Year 3) and discuss them with their LA. The LRs written during the year are low stake and serve to monitor and promote learning, as well as to prepare for the high-stake end-of-year LR that is submitted to the SPC. This dialogic (interactive exchange with the LA) and two-step approach (low stakes, then high stakes) is an important aspect of establishing sustainable feedback practices [22]. We ask our students to reflect in their LRs on their progress across the CDs and to cite relevant data points from their portfolios, thereby promoting reflective practice and evaluative judgment. The requirements for the LRs increase gradually, in line with the curriculum and the students' growing expertise in writing them (see Table 2).
We provide additional scaffolding in the form of support and guidance for writing the LR. At the beginning of the MMed, students participate in writing workshops to familiarize themselves with reflective writing. For each LR, they receive a template with the expected chapters and character limits for each chapter, guiding questions to encourage reflection, and a formatting guide; this support decreases over the course of the academic years.

3.2.7. Reflection Weeks

We felt it necessary to offer protected time for reflection to signal the importance attached to this competency. Five reflection weeks (three in Year 1, two in Year 2, and none in Year 3) with no scheduled class activities give students time to reflect on their learning based on the data points collected in their portfolio (formative assessment results, feedback, reflections). Students write their LRs during those weeks and discuss them during their mandatory meeting with their LA.

3.2.8. Promotion Decision

Our promotion decision supports and builds on the formative and reflective focus of MMed. It represents the keystone of the whole formative approach. The SPC evaluates end-of-year LRs and makes promotion decisions for students. Below, we describe how we ensure that promotion decisions are fair, rigorous, trusted, and fully aligned with the overall approach.
Student Progress Committee (SPC): The 10 members of the SPC are senior clinicians (senior physicians, department heads) with academic titles. Each SPC member receives specialized training. To reduce the workload of these 10 members and to enrich the perspective on the level expected for promotion, we are currently recruiting external reviewers to participate in the second- and third-year promotion cycles; these are senior clinicians at teaching hospitals who work with clerkship students (i.e., after second-year promotion) and first-year residents (i.e., after third-year promotion).
Evaluation process: The evaluation of each LR follows a two-step process: (1) separate evaluation of each competency domain against the milestones (on a three-point scale: met, partially met, not met); (2) evaluation of the student's overall performance (on a three-point scale: promoted, conditionally promoted, not promoted) based on observed trends and patterns and the potential risk of promotion (for the student and for patients). Concurrently with the assessment, the evaluator prepares feedback highlighting the student's strengths and areas for improvement in each CD. The SPC meets for an initial "calibration session" at each promotion cycle: all reviewers evaluate and discuss 2–3 selected end-of-year LRs, including at least one LR expected to be good and one with difficulties. After this shared experience, each SPC member reviews 3–5 LRs individually. In case of concerns or hesitation, the first reviewer consults a second reviewer; if the two reviews differ, the SPC chair provides a third review (see the sketch of this escalation logic at the end of this section). The SPC then convenes for a final "deliberation session," where each member presents and discusses their reviewed LRs and promotion is decided. The robustness and defensibility of the decision are reinforced by documenting the whole process, thereby maintaining the thread of evidence up to the decision. Our Med-Exam Office supports the SPC administratively and conceptually.
Notification of results: A letter notifies the student of the final promotion decision and its rationale, and provides narrative feedback for each CD as well as overall feedback.
Quality Assurance: At the end of each promotion cycle, the SPC critically discusses the promotion process, milestones, curriculum, and program. Deficiencies and requests for improvement are articulated to the appropriate entities (e.g., Med-Exam Office, Curriculum Committee, clinical rotations).
Remediation process: When a student is conditionally promoted or not promoted, a personalized formal remediation starts. In the case of a conditional promotion, the student can continue into the next year but must demonstrate specific improvements within a given period [36]. In the first step of remediation, the student submits to the SPC a remediation plan describing how they will correct their deficiencies, and the proposal is validated with or without modifications. The student continues their studies and must submit a remediation report by the agreed-upon deadline; the SPC evaluates it and decides on promotion. In the case of non-promotion, the student is not allowed to continue into the next year; here too, a remediation plan must be submitted and approved, and this plan determines whether the student repeats the entire year and is evaluated based on the regular LR or completes a personalized program, in which case a remediation report must be submitted. Remediation can also be triggered before the end-of-year high-stake evaluation: if an LA identifies significant difficulties for one of their students and no appropriate solution can be found within the regular program, the LA reports the case to the chair of the curriculum committee, who, if a remediation plan seems appropriate, initiates the remediation process with the SPC.
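The sketch below captures the two-step rating and the reviewer-escalation logic of the evaluation process described above. It is an illustration of the decision flow under stated assumptions, not the SPC's actual procedure in code (the paper describes a human process, not software); all identifiers are hypothetical.

```python
from typing import Dict, Optional

MILESTONE_SCALE = ("met", "partially met", "not met")
OVERALL_SCALE = ("promoted", "conditionally promoted", "not promoted")

def review_learning_report(per_cd: Dict[str, str],
                           overall: str,
                           second_review: Optional[str] = None,
                           chair_review: Optional[str] = None) -> str:
    """Step 1: each competency domain is rated against its milestones.
    Step 2: an overall judgment is formed from trends, patterns, and risk.
    Concerns trigger a second reviewer; if the two reviews differ, the
    chair casts a third review before the final deliberation session."""
    assert all(rating in MILESTONE_SCALE for rating in per_cd.values())
    assert overall in OVERALL_SCALE
    if second_review is None or second_review == overall:
        return overall               # confident first review, or concordance
    assert chair_review in OVERALL_SCALE
    return chair_review              # discordant reviews: chair decides

# Example: a hesitant first reviewer consults a second, who disagrees.
result = review_learning_report(
    {"CD1": "met", "CD8": "partially met"},
    overall="conditionally promoted",
    second_review="promoted",
    chair_review="promoted",
)
print(result)  # -> promoted
```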

3.2.9. Anticipating the Federal Licensing Exam

Our program aims to prepare our students for residency and for the FLE. While PROFILES ensures general curriculum alignment with the FLE, we had to adjust to the focused assessment formats of the FLE (MCQ and OSCE), which do not cover the broader competency areas. We made sure that our students were adequately exposed to these two formats, and we ensured that our progress test questions matched the format and level of difficulty of the FLE MCQs (clinical vignettes). In addition, we gave our students access to a commercial medical learning platform for self-testing, which is widely used in other medical schools for FLE preparation. Finally, we planned a review and training period (with mock MCQs and OSCE) similar to that offered by other Swiss medical schools.

4. Implementing ASF

“Building the bridge as you walk on it”
Robert E. Quinn

4.1. How We Started

We learned of the principles of PA from the literature and attended international conference workshops on PA; however, it was not until we considered PA as the basis for our ASF that we gradually explored its depth and breadth. We first deepened our knowledge of the literature on PA [32,37] and then gradually expanded it to include feedback and evaluative judgment [7,20], formative assessment [12], and Four-Component Instructional Design (4C/ID) [24]. Before deciding on PA, however, we felt it was important to see such a program live and to speak with stakeholders at a school that had implemented PA. The CCLCM program [11,38] was particularly compelling because it shares four key characteristics with the Fribourg master's program: the program was created from scratch, the cohorts are small (40 versus 32 students), the program is competency-based (PROFILES versus the ACGME—Accreditation Council for Graduate Medical Education—core competencies), and the students face a similar national licensing exam (FLE versus USMLE—United States Medical Licensing Examination). A visit to CCLCM deepened our understanding of PA and gave us the confidence and approach we needed to commit; after that, there was no turning back. With 17 months to go before the first students arrived, we had to build the entire program and PA while deepening our understanding of it.

4.2. The Change of Perspective

Coming from the traditional "assessment game," approaching and understanding the whole formative approach requires a complete reorientation, a paradigm shift, and a maturing period. Old habits and certainties must be deconstructed, leading to moments of uncertainty and disorientation before one finds one's way in the new reality. Immersing ourselves in the literature, confronting and discussing the new concepts, and exploring the resulting discomfort helped us move from a static, fragmented, dualistic formative-summative view of assessment to a dynamic, longitudinal view of the assessment system. Relying on student motivation and ability means evolving from a behaviorist mentality (think operant conditioning through assessment results) to a constructivist vision (think the importance of context and interactions) [39] and a student-centered humanistic approach (think the development of the learner as a human being with needs, aspirations, and autonomy) [40,41]. The energy and effort for such a journey can (and must) be expected from the core group that designs and implements the MMed.
What contributed to this change of perspective on assessment was embracing the competency-based concept. Once knowledge and skills are no longer the sole focus of learning and teaching but are integrated into the richness and complexity of competency, the perspective of a robust, formative, continuous assessment makes full sense.

4.3. The Question of Resources

The budget for implementing and operating the MMed, and the limits of the performance agreements with the teaching hospitals (we fund all clinical activities related to the MMed), were calculated with a traditional program in mind. The idea of implementing ASF came after the budget decision, with the hiring of the new MMed team, so we had to make do with the planned resources. We were able to reduce the burden of summative exams [42]: creating psychometrically sound MCQ exams with a high renewal rate of questions for a cohort of 40 students would have cost considerable time and faculty resources. We also shortened the originally planned duration of the clinical rotations from 40 to 30 weeks to align with clinical training capacity, freeing up resources for teaching and ASF. The planned level of clinical supervision proved appropriate. The additional burden of ASF came largely from the LA program and the SPC, from the more demanding end-of-module evaluations, and from increased faculty and administrative activity for the challenging promotion process.
Resource planning is one aspect; faculty involvement is another, and it is sometimes a limiting factor. Many of our faculty members were used to teaching selectively but had no experience in managing an entire curriculum; they were unaware of the many roles and responsibilities that came with the MMed (e.g., the curriculum commission, assessment quality control, student counseling, and the evaluation of students' performance in clinical teaching). New teachers had to learn the ropes and invest in a whole new set of activities that did not have much to do with their idea of teaching: regular governance activities (curriculum committee, evaluation committee, coordination meetings, evaluation meetings, participation in FLE committees); the requirements of a competency-based curriculum (longitudinal planning and mapping of competencies, in-training evaluation, evaluation of clinical rotations, maintaining a portfolio); the specifics of ASF (LA mentoring time and assessing end-of-year LRs in the SPC, rather than relying on an easy-to-understand average of a few exams for promotion decisions); and understanding the concepts of CBME and ASF.

4.4. The Importance of the Narrative

“Narrative is the form humans use to make sense of events and relationships.”
Gillie Bolton
As expected, PA, with its many facets, was unknown to the faculty. We needed to develop a narrative to convey the complexity of this new assessment approach and to convince teachers to invest time and attention in more than just knowledge and attendance. For students, our primary concern was to make sure they were not dazzled by the "no grades" narrative and did not see it as an easy track. Some of the narratives we found helpful when discussing ASF with the different stakeholders are presented hereafter.
The Preparation for Postgraduate Training narrative: In Swiss residency training, there are no end-of-semester or end-of-year exams and no grades, but there is plenty of formal and informal feedback from which to learn. Students need to know what they must improve and where, based on internal and external feedback. PA brings the learning behaviors expected in postgraduate education to undergraduates, with additional support, narrowing the learning-behavior gap for graduates entering residency training.
The Patient Care narrative [37]: In patient care, clinicians base their judgment on quantitative data (lab values), non-numerical observational data (imagery), narrative information (anamnesis, observation from other care providers), reflection on all these data, and clinical experience (expert judgment). There is a contextual weighting of these different pieces of information. The same is true for ASF, where student promotion is based on numerous data points of varying sources and quality.
The Verdict narrative: The promotion decision is based on interpreting facts, clues, and evidence. A thread of evidence and clear rules (milestones) are required. The final decision/verdict needs to be commented on and explained.
The Accreditation narrative: ASF is similar to the accreditation process that medical schools must undergo. Students write a self-evaluation report (the LR) referring to specific standards (the CDs and milestones). A panel of external experts (the SPC) reviews the self-evaluation report, decides if expectations are met, and communicates conditions and/or recommendations for improvement.
The Good Old Times but Safer narrative: Prior to 2011, the undergraduate program in Switzerland had no exams in Years 4 and 5 (equivalent to master's Years 1 and 2), and Year 6 was dedicated to preparing for the licensing exam. The Bologna reform replaced this with multiple summative module exams. ASF brings back some of this thrust toward student development without constant summative assessments, something many physicians in leading positions could relate to.
Two established narratives opposed the ASF approach. In the Knowledge vs. Competencies narrative, knowledge is falsely perceived as being neglected in favor of competencies, which are considered a wobbly hype. In the Only Summative Assessment Drives Learning narrative, summative assessment is seen as the only way to motivate students to learn seriously. In our experience, the two narratives seemed to correlate, and both hindered the adoption of ASF. Unsurprisingly, individuals who adhered to a competency-based narrative were convinced by ASF.

4.5. Faculty Development

In addition to the specific workshops for LAs and the SPC, from 2019 to 2021 we trained over 330 participants (mainly from teaching hospitals) in 26 workshops (from principles of medical education to program-specific workshops on giving feedback, writing MCQs, teaching in the clinical setting, etc.). The presentation of the program and the principles of ASF were woven into the various workshops; over time, we increasingly emphasized the topic of CBME.

4.6. Continuous Improvement

It is only fair that we adhere to the same reflective practice that we require from our students. Over the past four years, we have undergone annual external evaluations in the form of guest experts, the mandatory accreditation, and an application for an award. Writing reports, responding to questions, and meeting standards have provided, and continue to provide, many opportunities to deepen our understanding of ASF. Sustained critical reflection helps us continuously improve ASF. The consistency of our CDs and milestones, the aggregation and visualization of data in the ePortfolio, the continuous training of our LAs, and the workload for students and faculty are some of our current challenges.

5. Discussion

“Think big, start small, adjust frequently.”
anonymous
This case study shares our experience of implementing PA at a Swiss public university. Four years ago, we began to plan our new 3-year competency-based undergraduate MMed around PROFILES [2] and the goals set by the State Council of Fribourg to promote family medicine. We chose PA for our assessment system to align with the competency-based curriculum. Our cohort size was 40 students, with a corresponding faculty size. We ran ASF on the resources initially planned for a traditional assessment system, which was possible by adjusting some activities.
Designing the new MMed from a clean slate was undoubtedly beneficial in the planning phase, as there were few legacies to be defended. This newness also meant newly appointed senior faculty with limited experience in teaching, especially in running and governing an entire program; this inexperience was a liability in the implementation phase, when faculty needed to mobilize for all non-student-contact teaching activities. At that point, ASF was seen as unnecessarily complex and as diverting from teaching. The systemic nature of PA makes it intrinsically more complex; the novelty of the approach certainly magnified the resistance, and for some, any governance activity was perceived as unnecessary overhead. Nevertheless, ASF, despite its complexity, proved highly resilient to the significant constraints that COVID-19 imposed on us during the implementation years.
Another facet was accepting assessment as more than a measuring instrument for students' past learning efforts. In our observation, faculty members who perceived competencies as antagonistic to knowledge acquisition had more difficulty accepting the formative nature of assessment and the central role of feedback; their focus was more on teaching and less on learning. As our students must take the FLE, which is mandatory for all Swiss graduates and benchmarks all medical schools, there is added pressure on teaching.
We did not conduct an in-depth exploration of the students' experience of ASF. Nevertheless, we observed that the first semester was a tough and sometimes painful transition to a new self-responsibility for students. Some students thrived, others struggled, and many adapted. In the future, we need a better understanding of the type of student who benefits most from ASF in order to admit the right students into the MMed.

6. Conclusions

Our first cohort of 40 students graduated and took the FLE in the summer of 2022. We are entering a consolidation phase that includes many necessary improvements while keeping the program's sustainability a high priority. On the faculty side, we need to ensure commitment to the non-traditional teacher roles that support student learning through feedback, the aggregation of feedback, and meaningful promotion decisions. On the student side, our challenge is to maintain our first cohorts' dynamism and engagement over the years and to better profile the students who would benefit from this formative environment.

Author Contributions

Conceptualization, R.B., E.B. and M.M.; Funding acquisition, I.C.; Methodology, R.B., P.-A.B. and S.M.; Project administration, A.G. and I.C.; Writing—original draft, R.B.; Writing—review and editing, R.B., E.B., A.G., S.M., I.C. and M.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

We would like to thank all the students for their trust, patience, and constructive criticism. We are also grateful to all the faculty members for their openness and confidence. We extend a special thank you to the late Elaine Dannefer, Alan Hull, and Beth Bierer at CCLCM, Sylvia Heeneman and Suzanne Schut at Maastricht University, and the late Harold Lyon, guest Fulbright Scholar.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Bonvin, R.; Nendaz, M.; Frey, P.; Schnabel, K.P.; Huwendiek, S.; Schirlo, C. Looking Back: Twenty Years of Reforming Undergraduate Medical Training and Curriculum Frameworks in Switzerland. GMS J. Med. Educ. 2019, 36, Doc64.
2. PROFILES Working Group. PROFILES—Principal Relevant Objectives and Framework for Integrative Learning and Education in Switzerland; Joint Commission of the Swiss Medical Schools: Zurich, Switzerland, 2017.
3. Michaud, P.A.; Jucker-Kupper, P.; The PROFILES Working Group. The "PROFILES" Document: A Modern Revision of the Objectives of Undergraduate Medical Studies in Switzerland. Swiss Med. Wkly. 2016, 146, w14270.
4. Federal Statistical Office. Number and Density of Doctors, Dental Practices and Pharmacies, by Canton; Federal Statistical Office: Neuchâtel, Switzerland, 2021.
5. Sohrmann, M.; Berendonk, C.; Nendaz, M.; Bonvin, R. Nationwide Introduction of a New Competency Framework for Undergraduate Medical Curricula: A Collaborative Approach. Swiss Med. Wkly. 2020, 150, w20201.
6. Van Melle, E.; Frank, J.R.; Holmboe, E.S.; Dagnone, D.; Stockley, D.; Sherbino, J.; Competency-Based Medical Education Collaborators International. A Core Components Framework for Evaluating Implementation of Competency-Based Medical Education Programs. Acad. Med. 2019, 94, 1002–1009.
7. Boud, D.; Ajjawi, R.; Tai, J.; Dawson, P. Developing Evaluative Judgement in Higher Education; Routledge: London, UK; New York, NY, USA, 2018.
8. Pfarrwaller, E.; Audétat, M.-C.; Sommer, J.; Maisonneuve, H.; Bischoff, T.; Nendaz, M.; Baroffio, A.; Perron, N.J.; Haller, D.M. An Expanded Conceptual Framework of Medical Students' Primary Care Career Choice. Acad. Med. 2017, 92, 1536–1542.
9. Wu, L.; Wang, D.; Evans, J.A. Large Teams Develop and Small Teams Disrupt Science and Technology. Nature 2019, 566, 378–382.
10. Schuwirth, L.W.T.; van der Vleuten, C.P.M. How 'Testing' Has Become 'Programmatic Assessment for Learning'. Health Prof. Educ. 2019, 5, 177–184.
11. Dannefer, E.F.; Henson, L.C. The Portfolio Approach to Competency-Based Assessment at the Cleveland Clinic Lerner College of Medicine. Acad. Med. 2007, 82, 493–502.
12. Black, P.; Wiliam, D. Developing the Theory of Formative Assessment. Educ. Assess. Eval. Account. 2009, 21, 5–31.
13. Dochy, F.; Segers, M.; Gijbels, D.; Struyven, K. Assessment Engineering: Breaking Down Barriers between Teaching and Learning, and Assessment. In Rethinking Assessment in Higher Education: Learning for the Longer Term; Routledge: London, UK; New York, NY, USA, 2007; pp. 87–100.
14. Entwistle, N.; McCune, V.; Walker, P. Conceptions, Styles, and Approaches within Higher Education: Analytic Abstractions and Everyday Experience. In Perspectives on Thinking, Learning and Cognitive Styles; Routledge: London, UK; New York, NY, USA, 2014; pp. 103–136.
15. Cilliers, F.J.; Schuwirth, L.W.T.; Herman, N.; Adendorff, H.J.; van der Vleuten, C.P.M. A Model of the Pre-Assessment Learning Effects of Summative Assessment in Medical Education. Adv. Health Sci. Educ. 2012, 17, 39–53.
16. Shute, V.J. Focus on Formative Feedback. Rev. Educ. Res. 2008, 78, 153–189.
17. Butler, R. Enhancing and Undermining Intrinsic Motivation: The Effects of Task-Involving and Ego-Involving Evaluation on Interest and Performance. Br. J. Educ. Psychol. 1988, 58, 1–14.
18. Schut, S.; Driessen, E.; van Tartwijk, J.; van der Vleuten, C.P.M.; Heeneman, S. Stakes in the Eye of the Beholder: An International Study of Learners' Perceptions within Programmatic Assessment. Med. Educ. 2018, 52, 654–663.
19. Harrison, C.J.; Könings, K.D.; Molyneux, A.; Schuwirth, L.W.T.; Wass, V.; van der Vleuten, C.P.M. Web-Based Feedback after Summative Assessment: How Do Students Engage? Med. Educ. 2013, 47, 734–744.
20. Boud, D.; Molloy, E. Feedback in Higher and Professional Education: Understanding It and Doing It Well; Routledge: London, UK; New York, NY, USA, 2013.
21. Boud, D.; Molloy, E. Rethinking Models of Feedback for Learning: The Challenge of Design. Assess. Eval. High. Educ. 2013, 38, 698–712.
22. Carless, D.; Salter, D.; Yang, M.; Lam, J. Developing Sustainable Feedback Practices. Stud. High. Educ. 2011, 36, 395–407.
23. Hounsell, D. Towards More Sustainable Feedback to Students. In Rethinking Assessment in Higher Education: Learning for the Longer Term; Boud, D., Falchikov, N., Eds.; Routledge: London, UK; New York, NY, USA, 2007; pp. 87–100.
24. Van Merriënboer, J.J.G.; Kirschner, P.A. Ten Steps to Complex Learning: A Systematic Approach to Four-Component Instructional Design; Routledge: London, UK; New York, NY, USA, 2017.
25. Leach, D.C. Competence Is a Habit. JAMA 2002, 287, 243–244.
26. Epstein, R.M.; Hundert, E.M. Defining and Assessing Professional Competence. JAMA 2002, 287, 226–235.
27. Sandars, J. The Use of Reflection in Medical Education: AMEE Guide No. 44. Med. Teach. 2009, 31, 685–695.
28. Whitehead, C.; Selleger, V.; van de Kreeke, J.; Hodges, B. The 'Missing Person' in Roles-Based Competency Models: A Historical, Cross-National, Contrastive Case Study. Med. Educ. 2014, 48, 785–795.
29. Raelin, J.A. Toward an Epistemology of Practice. Acad. Manag. Learn. Educ. 2007, 6, 495–519.
30. Schön, D.A. The Reflective Practitioner: How Professionals Think in Action; Basic Books: New York, NY, USA, 1984.
31. Frenk, J.; Chen, L.; Bhutta, Z.; Cohen, J.; Crisp, N.; Evans, T.; Fineberg, H.; Garcia, P.; Ke, Y.; Kelley, P.; et al. Health Professionals for a New Century: Transforming Education to Strengthen Health Systems in an Interdependent World. Lancet 2010, 376, 1923–1958.
32. van der Vleuten, C.P.M.; Schuwirth, L.W.T.; Scheele, F.; Driessen, E.W.; Hodges, B. The Assessment of Professional Competence: Building Blocks for Theory Development. Best Pract. Res. Clin. Obstet. Gynaecol. 2010, 24, 703–719.
33. Hodges, B. Medical Education and the Maintenance of Incompetence. Med. Teach. 2006, 28, 690–696.
34. Harden, R.M. AMEE Guide No. 21: Curriculum Mapping: A Tool for Transparent and Authentic Teaching and Learning. Med. Teach. 2001, 23, 123–137.
35. Driessen, E. Do Portfolios Have a Future? Adv. Health Sci. Educ. Theory Pract. 2017, 22, 221–228.
36. Bierer, S.B.; Dannefer, E.F.; Tetzlaff, J.E. Time to Loosen the Apron Strings: Cohort-Based Evaluation of a Learner-Driven Remediation Model at One Medical School. J. Gen. Intern. Med. 2015, 30, 1339–1343.
37. Schuwirth, L.; van der Vleuten, C.P.M.; Durning, S.J. What Programmatic Assessment in Medical Education Can Learn from Healthcare. Perspect. Med. Educ. 2017, 6, 211–215.
38. Dannefer, E.F. Beyond Assessment of Learning toward Assessment for Learning: Educating Tomorrow's Physicians. Med. Teach. 2013, 35, 560–563.
39. Govaerts, M.J.B.; van der Vleuten, C.P.M.; Schuwirth, L.W.T.; Muijtjens, A.M.M. Broadening Perspectives on Clinical Performance Assessment: Rethinking the Nature of In-Training Assessment. Adv. Health Sci. Educ. 2007, 12, 239–260.
40. Rogers, C.R.; Lyon, H.C.; Tausch, R. On Becoming an Effective Teacher: Person-Centred Teaching, Psychology, Philosophy and Dialogues with Carl R. Rogers; Routledge: London, UK; New York, NY, USA, 2013.
41. DeCarvalho, R.J. The Humanistic Paradigm in Education. Humanist. Psychol. 1991, 19, 88–104.
42. van der Vleuten, C.P.; Heeneman, S. On the Issue of Costs in Programmatic Assessment. Perspect. Med. Educ. 2016, 5, 303–307.
Figure 1. The three control mechanisms of the Swiss Federal Act for undergraduate medical training.
Figure 2. ASF's four key concepts and nine program elements.
Figure 3. MMed's competency domains (based on Frenk et al. [31]).
Figure 4. Overview of the assessment layers of ASF.
Table 1. Example of mapping assessment activities to competency domains (Phase 2.2—Clinical Immersion & Focus Modules). Competency domains (CD 1–CD 8): CD 1 Medical Expert; CD 2 Family Medicine; CD 3 Physicianship; CD 4 Interaction & Collaboration; CD 5 Learning & Teaching; CD 6 Community Health; CD 7 EPA Progress; CD 8 Reflective Practice. "+" indicates the degree to which an assessment activity informs a given competency domain: (+) may inform, + some information, ++ good information, +++ rich information.

Clinical immersion (clinical rotations and longitudinal clerkship in family medicine)
(a) Medium-stake assessments
  • 3x end-of-rotation evaluation: ++ +++ +++
  • 1x end-term evaluation of the longitudinal clerkship in family medicine: ++++++++++++
  • Formative OSCE: +++ ++
(b) Low-stake assessments
  • Workplace-based assessments (3x/week): ++ ++++++++ +++
  • End-of-rotation knowledge test: ++
  • Feedback from peers, health professionals, and patients: ++++++ +++++
  • Self-evaluation and reflection: (+) in CD 1–7, ++ in CD 8

Focus modules (focus days, focus weeks)
(a) Medium-stake assessments
  • 1 thematic test (MCQ): +++ +
(b) Low-stake assessments
  • End-of-week quiz: +++
  • Feedback from peers and teachers: ++++++ +++++
  • Self-evaluation and reflection: (+) in CD 1–7, ++ in CD 8

Learning progress
(a) Medium-stake assessments
  • 3x progress test: +++
(b) Low-stake assessments
  • Formative learning report (LR 4): +++++++ +++
Table 2. Progressive build-up of the learning report requirements. Competency domains (CDs): CD 1 Medical Expert; CD 2 Family Medicine; CD 3 Physicianship; CD 4 Interaction & Collaboration; CD 5 Learning & Teaching; CD 6 Community Health; CD 7 EPA Progress; CD 8 Reflective Practice.

Year 1
  • LR 1 (formative): CD 1, CD 3, CD 4, and CD 8.
  • LR 2 (formative): CD 1 and CD 3; progress in 1 of CD 4–6, of the student's choice; 2 of the 9 EPAs (CD 7); CD 8.
  • LR 3 (end-of-year): CD 1, CD 2, and CD 3; progress in 1 of CD 4–6, of the student's choice; 3 of the 9 EPAs (CD 7); CD 8.
Year 2
  • LR 4 (formative): CD 1; progress in 2 of CD 4–6, of the student's choice; 3 of the 9 EPAs (CD 7); CD 8.
  • LR 5 (end-of-year): all eight CDs.
Year 3
  • LR 6 (formative, optional): CD 7 and CD 8; CD 1–6 optional.
  • LR 7 (end-of-year): CD 7 and CD 8; CD 1–6 optional.
