The need to develop the body of knowledge in the performance evaluation of e-learning in medical education is pressing. E-learning ceased to be optional in higher education institutions with the advent and spread of COVID-19. During the hard lockdown, physical and social distancing were adopted as measures to decrease transmission of the virus in dense populations such as higher education institutions [1]. This disrupted the teaching-learning process for more than 90% of the world's student population [2]. As a result, many universities took desperate measures and transitioned abruptly from traditional face-to-face learning to e-learning [3], a process more accurately termed Emergency Remote Learning (ERL) than e-learning. The literature describes this solution as an "imperfect yet quick solution to the crises" [2], and it could impair student performance.
It is recommended to follow a systemic approach when adopting e-learning in Medical Education, the first step being the assessment of needs and thorough requirement engineering [
4]. However, the readiness of many universities to implement e-learning is low, as evidenced by shortages of human resources, technology infrastructure, learning management systems, and student support structures. This lack of preparation creates several challenges [
5] and calls for cross-disciplinary research into the development of evaluation models for e-learning [
6], especially in the medical education domain.
Concomitantly, COVID-19 cannot be discussed without mentioning the ripple effect of the pandemic in the medical domain. The pandemic increased the demand for medical practitioners to manage the burden of disease and, consequently, the need to strengthen the capacity of healthcare professionals. Although e-learning is reported to be effective in enhancing the capacity of healthcare workers, the lack of a systematic approach to the design, monitoring, and evaluation of e-learning makes its impact on medical education highly debatable.
Very little research has been conducted to determine the key factors that impact student performance in e-learning in medical education (e-LMED) or to design a model for evaluating student performance in this domain. However, in the broader e-learning context, a handful of studies have evaluated e-learning [
6]. These studies used different techniques, methods, and approaches for assessing students’ performance [
7,
8,
9,
10,
11]; the resulting proliferation of publications has further fragmented the body of knowledge in this domain. This fragmentation is highlighted in the systematic review conducted by De Leeuw, De Soet [12]. The review reported that evaluating performance in e-learning is highly complex because of the assortment of e-learning methods and the diverse approaches to carrying out such evaluation correctly. It further noted that the domain has yet to reach any consensus on which indicators to evaluate, and it called for further studies to develop an evaluation tool that is properly constructed, validated, and tested. Such a tool would give researchers a firm footing for comparing findings on e-learning performance evaluation and for continuously improving the body of knowledge in the domain.
This study addresses the gap identified above by pooling the published studies, aiming to create a convergence of the body of knowledge in the performance evaluation of e-learning in medical education. Hence, the following questions are posed:
The Population, Intervention, Comparison, Outcomes, and Study design (PICOS) model was used to identify publications relevant to this study's aim in the Scopus database. Scopus was chosen because it fully indexes MEDLINE and covers a larger number of journals than other databases. Scopus also offers functions that facilitate citation analysis, the measurement of research collaboration, and the export of data to Microsoft Excel for further tabulation and mapping. The bibliometric analysis method was used to analyze the retrieved documents.
This paper first presents a brief contextual background to our study by exploring the definition of e-learning and the methods, models, and theories used to evaluate performance in e-learning in medical education. The next part of the paper presents our method and study design, followed by the results of our analysis. We conclude with the discussion and conclusion sections, where we interpret the results, highlight the limitations, and make recommendations for future research.
Short Contextual Background of the Study
The discussion about the benefits of e-learning in medical education predates the COVID-19 pandemic. Proponents of e-learning have, from different perspectives, stated the potential benefits of e-LMED. Sears et al. [
13] opined that individuals involved in e-learning had a better ability to apply knowledge and skills and to retain learned concepts in a professional setting over a long period. E-learning allows medical students to study across borders at remote locations at their convenience while giving them access to a vast array of academic resources [
14]. The core benefit of e-learning is that it facilitates learning without removing Healthcare Professionals (HCPs) from their locations or working environments, since doing so could further burden an already burdened system [
15].
Web 2.0 induced a paradigm shift in e-learning over a decade ago and enabled the e-learning scenarios that were precursors to the current landscape: dynamic technological environments in which users create their own content and collaborate with other users [
16]. Thurzo, Stanko [
17] evaluated the effect of Web 2.0 on dental education and found a growing number of e-learning resources based on innovative Web 2.0 technologies. More recently, the socialization of the internet through Web 4.0 and the anticipated arrival of Web 5.0 with artificial intelligence capabilities [18] promise to redefine the prospects of medical education. E-learning thus presents itself as a powerful and timely pedagogical tool in the current COVID-19 context, which introduces further resource and geographical constraints. Although several studies have investigated e-learning adoption and usage in medical education, they evaluate different performance constructs using various e-learning tools within different disciplinary contexts. No study synthesizes these variations into a single body of literature from which the knowledge base in this domain can grow.
E-learning is instruction delivered on digital devices such as desktop computers, laptops, tablets, or smartphones to support learning [
19]. E-learning is utilized in modern teaching and learning to support education, improve knowledge, advance performance, and improve students' learning outcomes. For this study, we operationalize performance as the degree of efficiency and effectiveness with which a student carries out their assigned tasks. Efficiency in this context refers to obtaining results with limited resources; effectiveness is the achievement of the desired goals [
20].
On the surface, evaluating the performance of an e-learning intervention seems to concern only the technology or, at most, the technology together with the task requirements for which it was adopted. However, other factors may influence student performance at the individual and organizational levels. Several models have been developed that describe the effect of technology on performance, yet contradictions in the realization of technology's expected benefits call for a deeper understanding of this effect [20]. Most research that evaluates performance focuses mainly on a single component (technology, task, or individual); consequently, these studies do not identify the factors that need to be considered, monitored, and evaluated when assessing students' individual and organizational performance in medical education.
Also, previous studies in the domain were conducted in different disciplines or departments of medical education; the e-learning tools and platforms used to facilitate learning are diverse, and so are the reported outcomes. Ajenifuja and Adeliyi [
21] and Oluwadele [
22] assessed the influence of e-learning on the performance of healthcare professionals before COVID-19. These studies designed a hybrid framework by combining the Task-Technology Fit model [
23] and the Kirkpatrick evaluation model [
24]. The framework postulates that when the learning tasks, technology infrastructure, individual characteristics, and contextual characteristics of students in medical education are aligned, performance in e-learning will be optimized. Performance was operationalized using the four constructs of the Kirkpatrick evaluation model: Reaction (satisfaction with the module), Learning (scores in the module), Behavior (ability to apply learned concepts at work), and Result (the impact of knowledge transfer to practice in the workplace). The studies evaluated, from these four perspectives, the performance of students who participated in an online antimicrobial stewardship and conservancy module hosted by the University of KwaZulu-Natal, South Africa.
The results showed that performance was enhanced at both the individual (reaction and learning) and organizational (behavior and result) levels because the technological infrastructure provided to facilitate the module aligned with the module's task requirements. Furthermore, participants affirmed that the lecturers and support team helped mitigate negative individual and contextual characteristics that could have hindered their performance; this support included a translator to moderate the language barrier encountered by non-English-speaking students and training for participants without e-learning experience. Interestingly, participants reported that they not only learned the content of the e-module but also acquired technical and research capabilities, which they found even more helpful in their daily work practice.
Other researchers in the e-learning evaluation domain operationalize performance in different ways and apply different models to evaluate it. For instance, Tautz, Sprenger [
25] operationalized e-learning performance with repetition and active learning in university classrooms and used the DeLone and McLean model to measure constructs such as quality of system use, perceived benefits, and student perspectives. Prasetyo, Ong [
26] operationalized e-learning performance as the acceptance of e-learning platforms. The study evaluates e-learning performance using the Extended Technology Acceptance Model (ETAM) and DeLone and McLean Information Systems Success Model. Constructs such as user interface, perceived ease of use, perceived usefulness, information quality, system quality, behavioral intentions, and actual usage of the system were measured.
Mastan, Sensuse [
6] reported on the models used by researchers for e-learning evaluation. These include the 5 Dimension Evaluation Model [
27], the Kirkpatrick Model [
28], the System Usability Model [
29,
30], the Technology Acceptance Model (TAM) [
30,
31,
32], SWOT analysis [
31], the Balanced Scorecard (BSC) [
33], Theory of Planned Behavior (TPB) [
31], Expectation Confirmation Model (ECM) [
31], Flow Theory [
31], and E-learning System Success Model [
34]. While these models have been used to evaluate e-learning platforms, systems, tools, and interventions, student performance or outcome evaluation is not the focus of most of these studies.
Studies evaluating e-learning performance in medical education are diverse, and there is no clear point of convergence on the constructs that should be assessed for performance to be understood. Moreover, medical education is a complex domain that does not lend itself as easily to the adoption and utilization of e-learning as other domains. This study therefore fills the gap by analyzing the published literature on performance evaluation in e-LMED to quantify the literature, identify its key terms, and map the extent of research collaboration in the domain.