Article

Development and Validation of an Evaluation Toolkit to Appraise eLearning Courses in Higher Education: A Pilot Study

Sabina Ličen, Maria Cassar, Lucia Filomeno, Alexandros Yeratziotis and Mirko Prosen
1 Department of Nursing, Faculty of Health Sciences, University of Primorska, Polje 42, 6310 Izola, Slovenia
2 Department of Nursing, Tal-Qroqq University of Malta, 2080 Msida, Malta
3 Department of Neurosciences, Sapienza University of Rome, 00185 Rome, Italy
4 Department of Computer Science, University of Cyprus, Nicosia 1678, Cyprus
* Author to whom correspondence should be addressed.
Sustainability 2023, 15(8), 6361; https://doi.org/10.3390/su15086361
Submission received: 4 February 2023 / Revised: 17 March 2023 / Accepted: 3 April 2023 / Published: 7 April 2023
(This article belongs to the Topic Advances in Online and Distance Learning)

Abstract

The development and evaluation of eLearning approaches is a global trend in higher education today. This study aimed to develop a companion evaluation toolkit consisting of formative and summative assessment scales to evaluate academics’ experiences in designing, delivering, and evaluating eLearning. To test the psychometric properties of the companion evaluation toolkit, an instrument validation study was conducted. Items were created, then tested for content and face validity. A confirmatory factor analysis (n = 185 participants) of the summative assessment scale examined the underlying structure of the scale, while reliability was assessed using the Cronbach’s alpha coefficient. The results show that the examined model is consistent with a three-factor structure (33 items) explaining a total of 62% of the variance. The results also show a high level of reliability for both the formative and summative scales that comprise the companion evaluation toolkit. The findings of this study will be of value to teachers and professionals involved in the development and use of learning management systems, as well as in the design, delivery, and evaluation of the eLearning process.

1. Introduction

In line with global trends and developments, Europe is facing major challenges, such as climate change, biodiversity loss, digital transformation, and an ageing population. Such challenges are being addressed in a post-pandemic context. The COVID-19 pandemic is widely recognised as the worst health crisis of this century. In turn, Europe’s position in the global context, the prosperity of its citizens, and the well-being of its future generations depend on how Europe responds to these situations. This response weighs heavily on the higher education sector, which performs an important role in Europe’s collective recovery from the pandemic. Learning from what has been experienced, reading the resulting context correctly, and developing sustainable solutions and change which effectively address the new prevalent and predicted realities rely on strong and efficient higher education systems [1]. Against this backdrop, the importance of excellent and accessible universities as a prerequisite and foundation for an open, democratic, just, and sustainable society and for economic growth is accentuated. These, in turn, influence employment rates [2]. The green transition [3] and the digital transition [4] are increasingly advocated in Europe as the two key strategic actions to ensure future-proof education, research, and innovation in higher education. In this regard, it is of utmost importance to address the wide disparities in digital literacy across the European Union (EU) [1]. Students need to be equipped with knowledge and skills for the future, and universities and their teaching staff need to upgrade their digital skills to meet the challenges of future generations and society at large [5].
In 2019, the emerging health crisis urged higher education institutions to provide appropriate infrastructure that would enable high-quality digital educational experiences through the targeted development of effective interactive online teaching and student engagement tools, and appropriate training for academic staff [6]. A new teaching paradigm slowly began to emerge in all European countries, even in those where traditional teaching was still the predominant model. The emergence of this new teaching paradigm means that educational institutions need to develop a targeted implementation strategy for eLearning and revise their quality assurance protocols, expanding them to include eLearning methods and focusing on appropriate inputs, processes, and outcomes [1].
According to the OECD [7], eLearning is defined as the use of information and communication technologies (ICT) in various educational processes to support and enhance learning in higher education institutions; it includes using ICT to supplement traditional classrooms, to deliver fully online learning, or to combine the two. eLearning focuses on the learner and involves a system that is interactive, repetitive, self-determined, and adaptable. eLearning differs from traditional educational approaches in three aspects: asynchronous transmission (lack of a temporal dimension), decentralisation (lack of a spatial dimension), and electronically mediated interaction or communication [8]. Moreover, online learning refers not only to teaching delivered online but also to the careful design of courses to improve learning performance and ensure a positive learning experience [9].
The use of digital technologies to support the educational process is important. This paper proposes that it is also of vital importance to purposefully plan and develop educational courses or programmes which inevitably require the use of digital technologies.
In 2019, the Erasmus+ funded Digital Education and Timely Solutions project (DIG-IT) brought together academic staff from five EU countries, Malta, Finland, Cyprus, Slovenia, and Italy, to collaboratively design and deliver a transnational online course aimed at training academic staff in the design, delivery, and evaluation of effective eLearning. At the same time, there was overwhelming pressure on administrators to provide an infrastructure that would offer targeted development of educational support, effective tools for active student engagement, and training for academic staff to ensure the quality of the educational experience [10]. With EU countries under pressure to offer digital education options, establishing an appropriate theoretical framework was a logical first step. The curricular framework serves as a credible quality standard and guide for the design, delivery, and evaluation of effective educational programmes that generate exceptional digital education experiences [11].
Assessment is a crucial element in education, both for accreditation and for supporting learners, and evaluation of the learning process allows shortcomings to be addressed and improvements implemented. The aim of the study reported in this paper was to develop a companion evaluation toolkit to the EU Digital Education Quality Standard Framework, consisting of formative and summative assessment scales to evaluate academics’ experiences of designing, delivering, and evaluating eLearning. On this basis, the hypothesis of the study was: “The companion evaluation toolkit developed to assess academics’ experiences in designing, delivering, and evaluating eLearning has good preliminary measurement properties in terms of content validity, construct validity, and reliability.”

2. Materials and Methods

2.1. Scale Development Process

The scale development study was conducted in May–June 2021, following specific phases of the scale development and standardization process [12], namely, (a) creating an initial set of items for both scales and response scales, (b) assessing content and face validity, and (c) testing factor structure and summative scale reliability.

2.1.1. Scale Development and Generation of Items

First, the theoretical framework, entitled the European Union Digital Education Quality Standard Framework, was developed through a four-step process [10]. It served as a quality standard and guide for the design, delivery, and evaluation of effective eLearning, such as educational programmes, learning units, sessions, courses, and modules (link to the interactive framework: http://project-digit.eu/index.php/digital-education-quality-standards/, accessed on 3 February 2023). Based on the literature review, this framework supports the studied dimensions of successful online action: content, delivery, support, structure, community, and outcomes, with the explicit ultimate goal of organisational change. The EU Digital Education Quality Standard rests on three theoretical pillars: (a) constructivism, which views learning as the result of activity and self-organisation and conceives of learning as an interpretive, recursive, nonlinear process in which learners actively interact with the social world; (b) social constructivism, in which individuals create or construct meanings and knowledge through interpersonal, intersubjective interactions, so that learning can be seen as re-acculturation, a process in which one increasingly educates oneself in ways shared by the particular knowledge communities of which one wishes to become a member; and (c) connectivism, a digital learning theory applicable to online networked environments that attempts to explain the learning challenges posed to individuals and educational institutions by information networks in which an exponentially growing amount of information can be constantly accessed. Such environments are complex and chaotic and cannot be reduced to simplified parts or a mechanical level, but should be considered as a whole, living organism [13,14,15,16]. Following its development, this theoretical framework was used as a quality standard to design, deliver, and evaluate an online nine-module course intended to improve academics’ online teaching and to achieve greater equity in online delivery and the availability of flexible learning options for populations in different countries [10].
The first step in the development process of the Companion Evaluation Toolkit comprised an exercise whereby the variables and sub-variables of the EU Digital Education Quality Standard (Supplementary File S1), which focus on the design, delivery, and evaluation of eLearning in higher education [10], were configured and listed as the draft items of the tool. Through this exercise, a large pool of items was created and systematically selected to capture all content potentially relevant to the target construct. This resulted in a pool of 22 items for the formative survey (temperature check) and 58 items for the summative survey. The formative survey (temperature check) was designed to assess academics’ progress and understanding during the first half of the course. The overall goal of the temperature check was to obtain feedback early so any necessary changes could be made immediately to improve the learning experience. In this way, the temperature check differs from the summative survey administered at the end of the course, which was designed to measure academics’ perception of the quality of the course design, delivery, and evaluation so that improvements can be made for the next course offering. The Companion Evaluation Toolkit was subsequently reviewed by five team members. To avoid potential misconceptions, this review focused on the dimensions of the eLearning design experience, item wording, and quality. To assess the extent to which each item was valued by academics, respondents were asked to rate each item on a 5-point Likert scale, with responses ranging from 1—strongly disagree to 5—strongly agree.

2.1.2. Evaluating the Content and Face Validity of the Scales

This study used a method that incorporated empirical techniques to calculate the content validity index (CVI) [17]. Seven university teachers from five countries involved in the study (Malta, Finland, Cyprus, Slovenia, and Italy), four women and three men, all experts in eLearning, were asked to review and rate the items of both scales. The experts were provided with an online content validation form and clear instructions to facilitate the process. For the relevance scale, a 4-point Likert scale was used with the following response options: 1 = not relevant, 2 = somewhat relevant, 3 = fairly relevant, and 4 = very relevant. For clarity, a 3-point Likert scale was used: 1 = not clear, 2 = needs some revision, and 3 = very clear. Before calculating the CVI, the relevance score was recoded as 1 (relevance scale of 3 or 4) or 0 (relevance scale of 1 or 2) [18]. Acceptable CVI values should be at least 0.83, indicating good content validity. Finally, the experts evaluated the wording of items by critically reviewing them for comprehension and providing feedback to improve the form of each item.
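For illustration, the recoding and averaging behind the CVI can be scripted in a few lines. The following Python sketch uses a hypothetical ratings matrix rather than the study data; it shows how item-level CVIs (I-CVIs) are derived from expert relevance ratings and screened against the 0.83 threshold.

```python
import pandas as pd

# Hypothetical relevance ratings from 7 experts (rows) for 3 items (columns),
# on the study's 4-point scale (1 = not relevant ... 4 = very relevant).
ratings = pd.DataFrame({
    "item_1": [4, 4, 3, 4, 4, 3, 4],
    "item_2": [4, 3, 4, 4, 2, 4, 4],
    "item_3": [2, 3, 1, 2, 3, 2, 2],
})

# Recode each rating as 1 (relevant: 3 or 4) or 0 (not relevant: 1 or 2) [18].
relevant = (ratings >= 3).astype(int)

# I-CVI: proportion of experts who rated the item relevant.
i_cvi = relevant.mean(axis=0)
print(i_cvi[i_cvi < 0.83])  # items falling below the acceptability threshold

# Scale-level CVI (S-CVI/Ave): mean of the item-level CVIs.
print(round(i_cvi.mean(), 2))
```

In this toy example, only item_3 (I-CVI ≈ 0.29) would be flagged for deletion, mirroring the way low-CVI items were removed from the summative survey.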

2.1.3. Evaluation of the Construct Validity and Reliability of the Scale

The short formative survey (temperature check) was finalised following content validation, and confirmatory factor analysis was used to assess the construct validity of the summative assessment scale [19]. Construct validity tests whether the scale measures what it is supposed to measure, whether it provides the information the evaluation intends to obtain, and whether the variables are statistically significantly related. The Kaiser–Meyer–Olkin index (KMO) and Bartlett’s test of sphericity were therefore used to check the suitability of the data for factor analysis. KMO values between 0.8 and 1 indicate that the sample is adequate, and a statistically significant Bartlett’s test of sphericity (p < 0.05) indicates that the data are suitable for factor analysis. In the data analysis, only variables with communalities greater than 0.4 were considered relevant [20].
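For readers replicating these checks outside SPSS, equivalent tests are available in open-source software. The sketch below is a minimal Python example using the factor_analyzer package; the response matrix is a randomly generated stand-in (not the study data), so real Likert responses would be substituted before interpreting the statistics against the thresholds just described.

```python
import numpy as np
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity,
    calculate_kmo,
)

# Stand-in for the participants x items response matrix (random, illustrative only).
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(185, 43)).astype(float)

# Bartlett's test of sphericity: p < 0.05 indicates factorable correlations.
chi_square, p_value = calculate_bartlett_sphericity(responses)

# Kaiser-Meyer-Olkin index: values between 0.8 and 1 indicate sampling adequacy.
kmo_per_item, kmo_total = calculate_kmo(responses)

print(f"Bartlett: chi2 = {chi_square:.2f}, p = {p_value:.4f}")
print(f"KMO = {kmo_total:.3f}")
```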
The reliability of the scales was assessed by analysing their internal consistency and stability. The Cronbach’s alpha coefficient was used to assess the reliability and internal consistency of the scales for each factor. High reliability of the measures provides the investigator with greater confidence that each indicator consistently measures the same factor. The threshold for acceptable reliability was 0.70 [20].
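Cronbach’s alpha itself is a simple function of the item variances and the total-score variance: alpha = k/(k − 1) × (1 − Σ s²_item / s²_total) for k items. A self-contained sketch, shown here with toy scores rather than the study data, makes the computation explicit.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()  # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of total scores
    return (k / (k - 1)) * (1 - sum_item_var / total_var)

# Toy data: five respondents answering three related Likert items.
scores = np.array([[4, 5, 4], [3, 3, 4], [5, 5, 5], [2, 3, 2], [4, 4, 5]])
print(round(cronbach_alpha(scores), 3))  # 0.918, above the 0.70 threshold [20]
```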

2.2. Measuring Academics’ Experiences of Designing, Delivering, and Evaluating eLearning

Sample

A convenience sampling method was used to recruit academics from five EU countries (Malta, Finland, Cyprus, Slovenia, and Italy) to participate in the online course and thus pilot the scales. The nine-module course was offered to 40 academics from each country. Both scales were administered to a total of 200 participants (due to its brevity, the formative scale was omitted from further psychometric evaluation). The sample size of 200 was consistent with Mundfrom’s [21] recommendation of a minimum of three respondents per item (at least 174 participants for the 58-item pool) while allowing for the likelihood of receiving incomplete questionnaires. For each scale, 185 questionnaires were completed (a 92.5% response rate). The formative survey (temperature check) was conducted after the fourth module, while the summative survey was conducted at the end of the nine-module course. Table 1 shows the demographic characteristics of the 185 participants. The academics were aged between 24 and 61, with a mean age of 43.29 (SD = 9.278). The majority were female (n = 132, 71.4%).

2.3. Ethical Considerations

The study was approved by the Scientific Research Commission of the Faculty of Health Sciences of the University of Primorska on 20 January 2021. All data were treated confidentially. Informed consent was obtained from all participants who were willing to engage in the study.

2.4. Data Collection

After receiving ethical approval, email invitations including a link to the formative and, later, the summative questionnaire were sent to potential participants. The online questionnaires were accompanied by an explanation of the purpose and method of completion and made available via the open-source online survey application 1KA (https://www.1ka.si/d/en, accessed on 3 May 2021) in May 2021 for the formative survey (temperature check) and in June 2021 for the summative survey. As part of the data collection process, a database was created which contained participants’ responses without personal information to ensure anonymity. During analysis, data were accessible only to the principal investigator. All data were kept confidential. Informed consent was obtained from all participants who agreed to participate in the study.

2.5. Data Analysis

For statistical analysis, data were exported to IBM SPSS Statistics version 29.0 (IBM Corp., Armonk, NY, USA). To determine the psychometric properties of the Companion Evaluation Toolkit, the following statistical analyses were performed: descriptive statistics, the Cronbach’s alpha coefficient to determine internal consistency, confirmatory factor analysis to estimate the factor structure, and the Pearson correlation coefficient to determine correlations between variables. p-values below 0.05 were considered statistically significant.

3. Results

Following minor textual corrections to a small number of items, all items were retained in the formative survey after content validation (Table 2), whereas 15 problematic items were deleted from the summative survey due to their low CVI (Supplementary File S2). This revision was carried out to ensure terminological appropriateness and correct spelling throughout the tool. The Cronbach’s alpha for the formative assessment scale (temperature check) was 0.945, indicating a high level of internal consistency.

3.1. Construct Validity Assessment of the Summative Assessment Scale

Confirmatory factor analysis (CFA) was conducted to assess the validity of the summative assessment scale. The strength of the partial correlations among variables (i.e., the extent to which the variables can be explained by common factors) was examined using the Kaiser–Meyer–Olkin index, and the factorability of the correlation matrix was confirmed by Bartlett’s test of sphericity (KMO = 0.807; Bartlett’s χ2 = 6480.274, df = 528, p < 0.001). These results indicated that the data were adequate and appropriate for proceeding with the factor reduction procedure.
An examination of the kurtosis and skewness statistics indicated that all items were reasonably normally distributed (values ranging from −0.768 to 0.127) and thus acceptable for performing CFA. The initial CFA showed that the 43 items yielded an 11-factor solution for the summative assessment scale. A factor analysis with Promax rotation was then conducted. Items with small loadings, which did not fit the factor solution well, were removed from the analysis (10 of 43 items). In addition, only factors with eigenvalues greater than one (1) were retained for further study.
Finally, 3 factors explaining 62% of the variance were extracted from the remaining 33 items. The scree plot in Figure 1 shows the number of factors.
Table 3 summarises the result of the rotated component matrix using Promax rotation. Using the loading criteria of 0.40 [20], 33 items showed strong factor loading ranging from 0.58 to 0.85. Factor 1, Content and Structure: Perspective on Learning Experience and Outcomes, consisted of 17 items with factor loadings ranging from 0.67 to 0.84 and accounted for 50.92% of the variance. Factor 2, Established Online Community and Support consisted of 10 items with factor loadings ranging from 0.58 to 0.85 and accounted for 6.12% of the variance. Factor 3, Delivery: An Overall Perspective, consisted of 6 items with factor loadings ranging from 0.65 to 0.83 and accounted for 5.20% of the variance. Thus, this three-factor solution represents the core subscales of the summative assessment scale (Table 3).
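As an illustration of the extraction sequence reported above (Kaiser’s eigenvalue > 1 criterion, Promax rotation, and the 0.40 loading cutoff), the following Python sketch again relies on the factor_analyzer package. The response matrix is randomly generated stand-in data, so it will not reproduce the three-factor solution in Table 3.

```python
import numpy as np
from factor_analyzer import FactorAnalyzer

# Stand-in for the 185 x 33 matrix of retained item responses (random, illustrative).
rng = np.random.default_rng(1)
responses = rng.integers(1, 6, size=(185, 33)).astype(float)

# Step 1: unrotated solution; inspect eigenvalues (retain factors with eigenvalue > 1).
fa = FactorAnalyzer(rotation=None)
fa.fit(responses)
eigenvalues, _ = fa.get_eigenvalues()
n_factors = int((eigenvalues > 1).sum())

# Step 2: refit with the retained factors using an oblique Promax rotation,
# which permits correlated factors (consistent with the r = 0.75-0.83 reported later).
fa = FactorAnalyzer(n_factors=n_factors, rotation="promax")
fa.fit(responses)

# Step 3: flag items whose strongest absolute loading falls below the 0.40 criterion [20].
loadings = fa.loadings_
weak = np.where(np.abs(loadings).max(axis=1) < 0.40)[0]
print(f"{n_factors} factors retained; weak items (0-indexed): {weak}")
```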

3.2. Factor Correlations and Reliability of the Summative Assessment Scale

The correlation coefficients among the latent variables were all significant, ranging from 0.75 to 0.83. The strongest correlation was observed between “Content and Structure: Perspective on Learning Experience and Outcomes” and “Established Online Community and Support” (r = 0.83), followed by “Content and Structure: Perspective on Learning Experience and Outcomes” and “Delivery: An Overall Perspective” (r = 0.80), and by “Established Online Community and Support” and “Delivery: An Overall Perspective” (r = 0.75) (Table 4). All correlations were statistically significant at p < 0.001.
The names of the scales were derived from the content of the variables and sub-variables from the EU Digital Education Quality Standard Framework.
Table 5 shows that the arithmetic means for the summative assessment scale and its subscales ranged from 4.59 (“Content and Structure: Perspective on Learning Experience and Outcomes”) to 4.67 (“Delivery: An Overall Perspective”). All mean values were well above the scale midpoint, indicating a tendency to score positively on each subscale. The standard deviations were similar across subscales (0.548 to 0.578), indicating adequate response variance. Cronbach’s alpha ranged from 0.889 to 0.953 for the three subscales and was 0.969 for the overall summative assessment scale, indicating a high level of internal consistency.
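The subscale descriptive statistics and reliabilities in Table 5 follow mechanically once items are grouped by factor. The sketch below illustrates this with a randomly generated stand-in matrix and hypothetical item-to-factor groupings (the real assignments are those in Table 3), so it will not reproduce the values in Table 5.

```python
import numpy as np
import pandas as pd

# Stand-in for the 185 x 33 item-response matrix (random, illustrative only).
rng = np.random.default_rng(2)
responses = pd.DataFrame(rng.integers(1, 6, size=(185, 33)),
                         columns=[f"q{i}" for i in range(1, 34)])

# Hypothetical item groupings matching the reported subscale sizes (17, 10, 6).
factor_items = {
    "Content and Structure": [f"q{i}" for i in range(1, 18)],
    "Community and Support": [f"q{i}" for i in range(18, 28)],
    "Delivery": [f"q{i}" for i in range(28, 34)],
}

for name, cols in factor_items.items():
    sub = responses[cols]
    person_means = sub.mean(axis=1)  # each participant's subscale mean
    # Cronbach's alpha, inlined: k/(k-1) * (1 - sum of item variances / total variance).
    k = sub.shape[1]
    alpha = (k / (k - 1)) * (1 - sub.var(ddof=1).sum() / sub.sum(axis=1).var(ddof=1))
    print(f"{name}: M = {person_means.mean():.2f}, "
          f"SD = {person_means.std(ddof=1):.2f}, alpha = {alpha:.3f}")
```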

4. Discussion

While digitalisation has brought tremendous benefits to our economy and society by increasing efficiency and productivity and making our lives more convenient, it has also brought several challenges, such as the digital divide and its inequitable impact on education. The transition from a predominantly traditional or teacher-centred pedagogy to a learner-centred pedagogy is a long and slow process; it depends on policy and teaching culture, the willingness of education sectors and universities to invest in digital technologies, and the knowledge and willingness of teachers to adopt the new teaching paradigm [22,23]. Therefore, teacher training is needed to ensure that teachers truly understand the basic concepts of eLearning so they can plan and implement eLearning strategies appropriate for 21st-century learners. An eLearning framework that addresses both the technological and pedagogical aspects of eLearning is desirable, and there is an urgent need to better prepare teachers to deliver online learning, using such a framework as a quality standard that incorporates generational factors and the capabilities of today’s digital technology. This study aimed to develop and examine the Companion Evaluation Toolkit, consisting of formative and summative evaluation scales to assess academics’ experiences in designing, delivering, and evaluating eLearning based on the EU Digital Education Quality Standard (Supplementary File S1) [10,22]. Although many frameworks and models have been developed for digital education, they vary significantly in their components, focus, and purpose. The EU Digital Education Quality Standard Framework, on which the nine-module course and accompanying evaluation toolkit are based, provides solutions to address inequities in the production and delivery of digital education in higher education. The resources created with it are open source and promote standardisation of infrastructure and online development across countries. Quality online design is the cornerstone of this process, as learning materials require extensive upfront planning and design, which demands an investment of time, effort, specialised resources, and skills [10].
Content validity of the formative and summative assessment scales was established through a combination of variables and sub-variables from the EU Digital Education Quality Standard Framework and expert validation. The inclusion of seven experts from educational institutions in five countries (Malta, Finland, Cyprus, Slovenia, and Italy) provided a broader perspective on the tool. Based on this expert validation, the CVI met Polit and Beck’s [18] criterion for content validity.
Construct validity of the summative assessment scale was assessed using confirmatory factor analysis. Factor analysis was supported by Bartlett’s test of sphericity, while the calculated Kaiser–Meyer–Olkin measure of sampling adequacy indicated that the sample was appropriate for factor analysis [20]. After the factor analysis, the summative assessment scale was modified to remove items found to be weak or unrelated. In the end, all included items had a factor loading of >0.4 and together accounted for 62% of the variance. The factor analysis revealed three factors. The overall reliability and stability of the summative assessment scale and the individual factors proved to be very good. The validity of the scale was tested in several steps to examine the construct in more detail, and the association between the factors was high. Therefore, we can conclude that the psychometric properties of both the formative and summative assessment scales are suitable. The Companion Evaluation Toolkit for evaluating academics’ experiences in designing, delivering, and evaluating eLearning is thus complete and ready for use and/or further analysis.
Assessment is an essential component of online education, especially when it comes to determining student learning outcomes. Creating assessments for online education, whether formative or summative, also requires applying constructivist learning principles to our shared understanding of the educational process and related goals. The purpose of conducting formative assessment is to determine student learning progress in the course [24]. This assessment strategy helps teachers identify students’ deficits and assess their knowledge [25]. This strategy also enables teachers to provide feedback to students on their performance. Summative assessment, on the other hand, is used to measure learners’ perceptions of course quality and their learning outcomes at the conclusion of the course. Summative assessment also assists with assessing students’ learning and acquired skills, in addition to providing feedback to improve the course for future iterations [26].
The “Content and Structure: Perspective on Learning Experience and Outcomes” factor refers to the content of an effective digital education programme, which must provide all the information learners need in order to achieve the desired learning outcomes. To be considered effective, the content must be relevant, well organised, and structured, supported by relevant resources, and based on theory and best practices. eContent and eLearning structures are effective when they improve students’ understanding and change their perspective on the content to be learned. eContent may be a package of study materials, but also multimedia demonstration links, simulations, detailed explanations, case studies, study assignments, and discussion forums that can stimulate learning. Listening and reading alone do not produce cognitive learning and knowledge creation; these are enabled through increased student interaction with content designed to engage students in an online learning environment. Student interaction with content typically occurs when, after listening to a presentation on a topic, students attempt to solve given problems and engage in online discussions by thinking about and sharing their thoughts on the course content [27,28]. Over the past decade, the use of eContent in the eLearning environment has increased dramatically. The availability and growth of digital information require efficient tools to assist students and faculty in effectively organising, integrating, and searching within available eResources to achieve specific educational goals [29]. In addition, the content should be organised into concise sections which provide a comprehensive overview of the information needed to achieve the desired learning outcomes. The content should progress logically and build on previous information to provide the learner with a sense of pace and completeness. It should be presented objectively and in unbiased language appropriate to the learner’s level of understanding. The content can be divided into clear sections, such as introduction, presentation of information, exercises, interaction, conclusion, and take-home messages [30,31]. The estimated time required for each section should be clearly communicated, monitored, and adjusted as needed. Learning can be actively supported by the rapid delivery of information. Conversely, studies show that poorly designed content can have a negative impact on learning [32].
The “Established Online Community and Support” factor focuses on establishing a collaborative learning environment that is inclusive and secure. The criteria of this factor relate to ongoing support and assistance throughout the learning process. Support is a key issue because eLearning cannot successfully achieve its goals without guidance and support [28]. Administrative and technical support should be freely available to both students and faculty to help them use and access the technology supporting the digital learning environment. Step-by-step instructional videos can be used to guide learners in downloading the necessary software or performing technical tasks. Face-to-face meetings or video conferencing can be used to help the learners who find using technology challenging [33]. Several studies advocate the importance of a sense of community in eLearning. Community can also result from learners’ shared knowledge in an online environment, and there is evidence that learner interaction in online courses can significantly contribute to learning success. Some researchers suggest that online discussions in asynchronous environments tend to encourage reticent individuals to participate more fully and feel less intimidated, and there is less time pressure than in face-to-face interactions [27,31,34].
The final factor, “Delivery: An Overall Perspective”, includes carefully designed instruction using intentionally selected technology to ensure usability and accessibility of the learning platform. Pedagogical strategies leverage technology to provide an engaging and interactive learning experience which provides learners with an opportunity for reflection and promotes teacher, social, and cognitive presence. The purposeful and relevant use of technology supports both teaching and active learning, and results in educational solutions that help meet future competency needs and provide learners with smooth, flexible, and convenient digital learning pathways [35,36,37,38]. Learning experiences must be appropriately designed to help competent teachers apply evidence-based pedagogical practices. Academic staff should incorporate a variety of learning strategies and activities to accommodate diverse learning styles and to enhance and optimise each student’s experience. In doing so, academic staff should use pedagogical frameworks specific to digital education to guide the design, delivery, and evaluation of learning encounters. Academic staff should be available in the learning environment to facilitate understanding of information, encourage independent learning, solicit feedback, engage learners in problem solving, and provide positive reinforcement [29,31,32].
Evaluation of eLearning design, delivery, and assessment is an important factor in digital learning. The results of previous studies have shown that this approach has an impact on the successful implementation of eLearning and is composed of learner assessment, infrastructure and learning environment assessment, learning process assessment, outcomes and perceptions, instructional material (content) assessment, feedback and formative and summative assessment, and achievement of objectives [39,40,41].
In addition to providing evidence of the validity of the Companion Evaluation Toolkit, the study demonstrated its satisfactory internal consistency, reflected in Cronbach’s alpha values of 0.89 to 0.97 for the summative assessment scale and its subscales, and high correlations among their individual items. The formative assessment scale was also found to have high internal consistency, with a Cronbach’s alpha of 0.94. Future studies might consider adapting the toolkit to evaluate students’ learning experiences of eLearning.
It appears that online education may soon be the leading teaching paradigm in higher education. Many educational institutions are attempting to integrate digital education programmes with open access to resources and to individualise education to meet the needs of all students. In addition, online learning can provide significant environmental benefits: integrating digital learning tools into higher education can reduce its environmental footprint, leading to greater sustainability [3].

Limitations

Convenience sampling was used to recruit participants for this study, which may introduce biases such as the underrepresentation of certain target groups within the sample. Future studies with a larger sample could provide additional support for the validity and reliability of the Companion Evaluation Toolkit. In addition, test–retest reliability should be assessed in future iterations of the modular courses using the Companion Evaluation Toolkit. The cultural sensitivity of the toolkit also remains to be explored; the authors acknowledge the need for cross-cultural testing of the toolkit and possibly its adaptation in this regard. Moreover, further validation studies are needed to strengthen the evidence of its effectiveness.

5. Conclusions

This study tested the validity and reliability of a 33-item, 3-factor summative assessment scale, which, along with the formative assessment scale, provides a companion evaluation toolkit for evaluating academics’ experiences in designing eLearning. The results revealed good reliability and validity. In view of the indicated need for the adoption of eLearning across the tertiary education sector and the creation of sustainable solutions and change driven by contemporary realities and fuelled by the recent pandemic, this paper holds relevance for educational institutions and policy makers. Both the framework and the toolkit examined in this study are freely available on the project website (http://project-digit.eu, accessed on 18 January 2023) with the aim of providing a quality standard and support for all educators involved in the design, delivery, and evaluation of effective digital education.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/su15086361/s1, File S1: EU Digital Education Quality Standard Framework variables and sub-variables; File S2: rejected items.

Author Contributions

Conceptualisation and design, S.L., M.C., L.F., A.Y. and M.P.; methodology, S.L. and M.P.; data collection, S.L., M.C., L.F., A.Y. and M.P.; data curation, S.L.; writing—original draft preparation, S.L. and M.P.; writing—review and editing, S.L., M.C., L.F., A.Y. and M.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Erasmus+ Programme of the European Union “Digital Education Initiatives and Timely Solutions” (grant number 2019-1-MT01-KA203-051171).

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Commission for Scientific Research Work at the University of Primorska on 20 January 2021. All data were treated confidentially. Written informed consent was obtained from all participants who were willing to participate in the study. Participants were informed about the aims and objectives of the study, the possibility of feedback, and the option to withdraw from the study at any point. Participation was voluntary and anonymous.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data are available from the corresponding author upon reasonable request as the participants were assured that it would remain confidential.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. European Commission. Commission Communication on a European Strategy for Universities. European Education Area. Available online: https://education.ec.europa.eu/node/1900 (accessed on 5 February 2022).
  2. Publications Office of the European Union. Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions a European Strategy for Data, COM/2020/66 Final. Available online: http://op.europa.eu/en/publication-detail/-/publication/ac9cd214-53c6-11ea-aece-01aa75ed71a1/language-en (accessed on 14 September 2022).
  3. European Education Area. Learning for the Green Transition and Sustainable Development. Available online: https://education.ec.europa.eu/node/1561 (accessed on 25 November 2022).
  4. European Education Area. Digital Education Action Plan (2021–2027). Available online: https://education.ec.europa.eu/node/1518 (accessed on 25 November 2022).
  5. Falloon, G. From Digital Literacy to Digital Competence: The Teacher Digital Competency (TDC) Framework. Educ. Technol. Res. Dev. 2020, 68, 2449–2472. [Google Scholar] [CrossRef] [Green Version]
  6. Rapanta, C.; Botturi, L.; Goodyear, P.; Guàrdia, L.; Koole, M. Online University Teaching During and After the Covid-19 Crisis: Refocusing Teacher Presence and Learning Activity. Postdigital Sci. Educ. 2020, 2, 923–945. [Google Scholar] [CrossRef]
  7. OECD. E-Learning in Tertiary Education. Available online: https://www.oecd.org/education/ceri/35991871.pdf (accessed on 16 March 2023).
  8. Prosen, M.; Karnjuš, I.; Ličen, S. Evaluation of ELearning Experience among Health and Allied Health Professions Students during the COVID-19 Pandemic in Slovenia: An Instrument Development and Validation Study. Int. J. Environ. Res. Public. Health 2022, 19, 4777. [Google Scholar] [CrossRef]
  9. Schultz, R.B.; DeMers, M.N. Transitioning from Emergency Remote Learning to Deep Online Learning Experiences in Geography Education. J. Geogr. 2020, 119, 142–146. [Google Scholar] [CrossRef]
  10. MacDonald, C.; Backhaus, I.; Vanezi, E.; Yeratziotis, A.; Clendinneng, D.; Seriola, L.; Häkkinen, S.; Cassar, M.; Mettouris, C.; Papadopoulos, G.A. European Union Digital Education Quality Standard Framework and Companion Evaluation Toolkit. Open Learn. J. Open Distance E-Learn. 2021, 1–16. [Google Scholar] [CrossRef]
  11. Bates, A.W. Teaching in a Digital Age; Tony Bates Associates Ltd.: Vancouver, BC, Canada, 2015; ISBN 978-0-9952692-0-0. [Google Scholar]
  12. Kyriazos, T.A.; Stalikas, A. Applied Psychometrics: The Steps of Scale Development and Standardization Process. Psychology 2018, 9, 2531–2560. [Google Scholar] [CrossRef] [Green Version]
  13. MacDonald, C.J.; Archibald, D.; Trumpower, D.; Casimiro, L.; Cragg, B.; Jelley, W. Designing and Operationalizing a Toolkit of Bilingual Interprofessional Education Assessment Instruments. J. Res. Interprofessional Pract. Educ. 2010, 1, 304–316. [Google Scholar] [CrossRef]
  14. Kiraly, D. A Social Constructivist Approach to Translator Education: Empowerment from Theory to Practice, 1st ed.; Routledge: Manchester, UK; Northampton, MA, USA, 2018; ISBN 978-1-900650-33-5. [Google Scholar]
  15. Cleary, Y. Fostering Communities of Inquiry and Connectivism in Online Technical Communication Programs and Courses. J. Tech. Writ. Commun. 2021, 51, 11–30. [Google Scholar] [CrossRef]
  16. Fosnot, C.T. (Ed.) Constructivism: Theory, Perspectives, and Practice, 2nd ed.; Teachers College Press: New York, NY, USA, 2005; ISBN 978-0-8077-4570-0. [Google Scholar]
  17. Lindell, M.; Brandt, C.J. Assessing Interrater Agreement on the Job Relevance of a Test: A Comparison of CVI, T, r-Sub(WG(J)), and R*-Sub(WG(J)) Indexes. J. Appl. Psychol. 1999, 84, 640–647. [Google Scholar] [CrossRef]
  18. Polit, D.F.; Beck, C.T. Essentials of Nursing Research: Appraising Evidence for Nursing Practice, 9th ed.; Wolters Kluwer Health: Philadelphia, PA, USA, 2016. [Google Scholar]
  19. Basham, R.; Jordan, C.; Hoefer, R. Reliability and Validity in Qualitative Research. In The Handbook of Social Work Research Methods; Thyer, B.A., Ed.; Thousand Oaks Sage: Thousand Oaks, CA, USA, 2009; pp. 51–65. [Google Scholar]
  20. Field, A.P. Discovering Statistics Using IBM SPSS Statistics; SAGE Publications: London, UK; Thousand Oaks, CA, USA, 2018; ISBN 978-1-5264-1951-4. [Google Scholar]
  21. Mundfrom, D.J.; Shaw, D.G.; Ke, T.L. Minimum Sample Size Recommendations for Conducting Factor Analyses. Int. J. Test. 2005, 5, 159–168. [Google Scholar] [CrossRef]
  22. OECD. Innovating Education and Educating for Innovation: The Power of Digital Technologies and Skills; Organisation for Economic Co-operation and Development: Paris, France, 2016. [Google Scholar]
  23. Gapsalamov, A.R.; Akhmetshin, E.M.; Bochkareva, T.N.; Vasilev, V.L. “Digital Era”: Impact on the Economy and the Education System (Country Analysis). Utopía Prax. Latinoam. 2020, 25, 170–186. [Google Scholar]
  24. Arrogante, O.; González-Romero, G.M.; López-Torre, E.M.; Carrión-García, L.; Polo, A. Comparing Formative and Summative Simulation-Based Assessment in Undergraduate Nursing Students: Nursing Competency Acquisition and Clinical Simulation Satisfaction. BMC Nurs. 2021, 20, 92. [Google Scholar] [CrossRef] [PubMed]
  25. Ruiz-Jiménez, M.C.; Licerán-Gutiérrez, A.; Martínez-Jiménez, R. Why Do Student Perceptions of Academic Performance Improve? The Influence of Acquired Competences and Formative Assessment in a Flipped Classroom Environment. Act. Learn. High. Educ. 2022. [Google Scholar] [CrossRef]
  26. Sudakova, N.E.; Savina, T.N.; Masalimova, A.R.; Mikhaylovsky, M.N.; Karandeeva, L.G.; Zhdanov, S.P. Online Formative Assessment in Higher Education: Bibliometric Analysis. Educ. Sci. 2022, 12, 209. [Google Scholar] [CrossRef]
  27. Coman, C.; Țîru, L.G.; Meseșan-Schmitz, L.; Stanciu, C.; Bularca, M.C. Online Teaching and Learning in Higher Education during the Coronavirus Pandemic: Students’ Perspective. Sustainability 2020, 12, 10367. [Google Scholar] [CrossRef]
  28. Cheawjindakarn, B.; Suwannatthachote, P.; Theeraroungchaisri, A. Critical Success Factors for Online Distance Learning in Higher Education: A Review of the Literature. Creat. Educ. 2013, 3, 61–66. [Google Scholar] [CrossRef] [Green Version]
  29. Afify, M. E-Learning Content Design Standards Based on Interactive Digital Concepts Maps in Light of Meaningful Learning Theory and Constructivist Learning Theory. J. Technol. Sci. Educ. 2018, 8, 5–16. [Google Scholar] [CrossRef]
  30. Gisbert, M.; Bullen, M. Teaching and Learning in Digital World: Strategies and Issues in Higher Education; Publicacions Universitat Rovira I Virgili: Reus, Spain, 2015; ISBN 978-84-8424-376-2. [Google Scholar]
  31. Reilly, C.; Reeves, T.C. Refining Active Learning Design Principles through Design-Based Research. Act. Learn. High. Educ. 2022. [Google Scholar] [CrossRef]
  32. Opre, D.; Șerban, C.; Veșcan, A.; Iucu, R. Supporting Students’ Active Learning with a Computer Based Tool. Act. Learn. High. Educ. 2022. [Google Scholar] [CrossRef]
  33. Mou, T.-Y. Online Learning in the Time of the COVID-19 Crisis: Implications for the Self-Regulated Learning of University Design Students. Act. Learn. High. Educ. 2021. [Google Scholar] [CrossRef]
  34. Roddy, C.; Amiet, D.L.; Chung, J.; Holt, C.; Shaw, L.; McKenzie, S.; Garivaldis, F.; Lodge, J.M.; Mundy, M.E. Applying Best Practice Online Learning, Teaching, and Support to Intensive Online Environments: An Integrative Review. Front. Educ. 2017, 2, 59. [Google Scholar] [CrossRef] [Green Version]
  35. Costley, J.; Fanguy, M.; Lange, C.; Baldwin, M. The Effects of Video Lecture Viewing Strategies on Cognitive Load. J. Comput. High. Educ. 2021, 33, 19–38. [Google Scholar] [CrossRef] [Green Version]
  36. Ellis, R.A.; Bliuc, A.-M. Exploring New Elements of the Student Approaches to Learning Framework: The Role of Online Learning Technologies in Student Learning. Act. Learn. High. Educ. 2019, 20, 11–24. [Google Scholar] [CrossRef]
  37. Gisbert, M.; Cela-Ranilla, J.M. Advanced Technology Environments to Support the Teaching/Learning Process in the University. In Advanced Technology Environments to Support the Teaching/Learning Process in the University; Gisbert, M., Bullen, M., Eds.; Publicacions Universitat Rovira i Virgili: Reus, Spain, 2015. [Google Scholar]
  38. Tuma, F. The Use of Educational Technology for Interactive Teaching in Lectures. Ann. Med. Surg. 2021, 62, 231–235. [Google Scholar] [CrossRef]
  39. Wasfy, N.F.; Abouzeid, E.; Nasser, A.A.; Ahmed, S.A.; Youssry, I.; Hegazy, N.N.; Shehata, M.H.K.; Kamal, D.; Atwa, H. A Guide for Evaluation of Online Learning in Medical Education: A Qualitative Reflective Analysis. BMC Med. Educ. 2021, 21, 339. [Google Scholar] [CrossRef] [PubMed]
  40. de Leeuw, R.A.; Westerman, M.; Walsh, K.; Scheele, F. Development of an Instructional Design Evaluation Survey for Postgraduate Medical E-Learning: Content Validation Study. J. Med. Internet Res. 2019, 21, e13921. [Google Scholar] [CrossRef]
  41. Al-Fraihat, D.; Joy, M.; Sinclair, J. Identifying Success Factors for E-Learning in Higher Education. In Proceedings of the 12th International Conference on e-Learning (ICEL 2017), Belgrade, Serbia, 23–24 September 2021; ACPI: Orlando, FL, USA, 2017. [Google Scholar]
Figure 1. Scree plot of the summative assessment scale.
Table 1. Demographic and other characteristics of the participants (n = 185).

Characteristic | n | %
Gender
  Male | 53 | 28.6
  Female | 132 | 71.4
Country
  Malta | 40 | 21.6
  Italy | 36 | 19.5
  Finland | 40 | 21.6
  Cyprus | 38 | 20.5
  Slovenia | 31 | 16.8
The level of computer proficiency
  Beginner a | 13 | 7
  Average b | 76 | 41.1
  Advanced c | 74 | 40
  Expert d | 22 | 11.9
Frequency of integrating computer technologies into teaching activities
  Rarely | 4 | 2.1
  Occasionally | 41 | 22.2
  Frequently | 41 | 22.2
  Almost Always | 69 | 37.3
  All the Time | 30 | 16.2
Total amount of in-service trainings received to date on the use of computer technology in the classroom
  None | 63 | 34.1
  A full day or less | 61 | 33
  More than a full day and less than a one-semester course | 33 | 17.7
  A one-semester course | 19 | 10.3
  More than a one-semester course | 9 | 4.9
Note: a the ability to perform basic functions in a limited number of computer applications; b demonstration of general competence in a range of computer applications; c the ability to use a wide range of computer technologies competently; d extreme competence in the use of a wide range of computer technologies.
Table 2. Temperature check (formative assessment scale), descriptive statistics, and content validity index (n = 185).

Items | Mean Score (SD) | I-CVI (R)
I am enjoying the course. | 4.30 (0.672) | 1.00
The course aligns with my learning expectations. | 4.35 (0.635) | 1.00
The expected weekly learning outcomes are clear. | 4.62 (0.519) | 1.00
The content presented is practical. | 4.58 (0.603) | 1.00
I will use the content of this course to improve my teaching. | 4.64 (0.545) | 1.00
The resource materials are helpful and relevant (RoadMap, eDocs, eSamples, reading materials). | 4.63 (0.485) | 1.00
The learning activities are practical. | 4.54 (0.580) | 1.00
The learning activities will help me feel more confident in designing online sessions in the future. | 4.38 (0.744) | 1.00
The learning activities will help me feel more comfortable in online teaching in the future. | 4.40 (0.669) | 1.00
I feel supported in my learning in this course. | 4.61 (0.542) | 0.86
The online course is professionally presented. | 4.74 (0.442) | 1.00
The online course is well organised and consists of meaningful segments that built on previous information. | 4.67 (0.471) | 1.00
I understand what I am expected to read, do, and discuss each week. | 4.51 (0.522) | 1.00
My questions are answered in a timely fashion. | 4.75 (0.449) | 1.00
I receive valuable feedback on my learning tasks. | 4.51 (0.700) | 1.00
I feel comfortable in the learning community. | 4.11 (0.670) | 1.00
The learning community created in this course facilitates my learning. | 4.00 (0.787) | 1.00
I will apply what I learn in this course to design online learning in my work setting. | 4.46 (0.500) | 1.00
I will apply what I learn in this course to teach online learning in my work setting. | 4.33 (0.638) | 1.00
I will apply what I learn in this course to evaluate online learning in my work setting. | 4.17 (0.738) | 1.00
This course has provided me with the knowledge and skills I can use to improve both my face-to-face and online teaching. | 4.38 (0.633) | 1.00
The knowledge and skills obtained in this course will help me better serve my students. | 4.60 (0.554) | 1.00
Note: Scoring is based on 5 possible responses, ranging from 1—strongly disagree to 5—strongly agree; SD—Standard Deviation; I-CVI (R, C)—Items Content Validity Index (Relevance)—While all experts rated the items as very clear, 1 item was found to be somewhat relevant by the experts within I-CVI (relevance).
Table 3. Summative assessment scale—confirmatory factor analysis yields a three-factor solution with factor loadings (n = 185).

Items | Factor Loadings | Mean Score (SD) | I-CVI (R)
Factor 1: Content and Structure: Perspective on Learning Experience and Outcomes
The learning experience implemented quality standards suitable for university continuing education. | 0.841 | 4.41 (0.713) | 1.00
The learning experience adhered to ethical pedagogical, internet, and privacy responsibilities. | 0.790 | 4.51 (0.576) | 1.00
The learning experience addressed my learning needs. | 0.776 | 4.39 (0.750) | 1.00
The learning experience included institutional support (IT department, library, registration, etc.). | 0.774 | 4.33 (0.763) | 1.00
The learning experience respected my current knowledge and experience. | 0.774 | 4.37 (0.732) | 1.00
The content included current best practices. | 0.772 | 4.42 (0.679) | 0.86
The learning experience utilised a freely accessible learning platform. | 0.765 | 4.51 (0.810) | 1.00
As a result of my participation in the nine-module course, I am able to design assignments to effectively assess learners’ knowledge in an online environment. | 0.755 | 4.50 (0.653) | 1.00
The learning experience sustained my interest. | 0.753 | 4.28 (0.726) | 0.86
The course offered opportunities for engagement in activities. | 0.752 | 4.39 (0.630) | 1.00
The course used appropriate technological applications. | 0.742 | 4.41 (0.713) | 1.00
The content covered the teaching skills required for implementing online learning. | 0.737 | 4.52 (0.609) | 1.00
As a result of my participation, I am able to prepare online courses (organise their content, create learning activities, set up a discussion forum, etc.). | 0.729 | 4.64 (0.561) | 1.00
The course provided strategies for learner support. | 0.728 | 4.49 (0.606) | 0.86
The course was relevant and authentic. | 0.718 | 4.57 (0.614) | 1.00
The course was organised into meaningful sections which built on previous information. | 0.695 | 4.49 (0.677) | 1.00
I will apply what I have learned in this course to design, teach, and evaluate online learning in my work setting. | 0.671 | 4.56 (0.650) | 1.00
Factor 2: Established Online Community and Support
The course facilitators created an online community built on respect, listening, and responding to feedback. | 0.851 | 4.52 (0.678) | 1.00
Learning occurred through discussion, reflection, and collaboration. | 0.805 | 4.29 (0.863) | 1.00
The content included an appropriate number of learning activities. | 0.794 | 4.41 (0.718) | 1.00
In the course, my opinions were taken into account and respected. | 0.788 | 4.43 (0.748) | 1.00
The course facilitators were empathetic to my needs. | 0.773 | 4.62 (0.613) | 0.86
Detailed and timely feedback was provided on course assignments. | 0.765 | 4.40 (0.721) | 1.00
The content was well organised. | 0.760 | 4.54 (0.617) | 1.00
The resources were helpful and relevant (instructional materials, Read It!, Apply It!, Discuss It!, etc.). | 0.741 | 4.48 (0.690) | 1.00
The learning experience facilitated scalability of resources (reuse of resources and aspects of the course). | 0.688 | 4.27 (0.776) | 1.00
The course included collaborative learning opportunities. | 0.576 | 4.19 (0.717) | 0.86
Factor 3: Delivery: An Overall Perspective
Information in the online learning environment was kept up to date. | 0.831 | 4.53 (0.681) | 1.00
The course facilitators were present and responsive in the online environment (reading and responding to posts and questions). | 0.828 | 4.51 (0.639) | 1.00
The course facilitators were knowledgeable. | 0.819 | 4.75 (0.504) | 0.86
The course was aligned with the learning outcomes. | 0.786 | 4.56 (0.713) | 1.00
The content was appropriate for my level of knowledge. | 0.774 | 4.50 (0.627) | 1.00
The course was professionally presented. | 0.649 | 4.62 (0.598) | 1.00
Note: Accumulated total explained variance = 62%. Bartlett’s Test of Sphericity: χ2 = 6480.274, p < 0.0001; Kaiser–Meyer–Olkin value = 0.807; SD—Standard Deviation; Factor 1—Content and Structure: Perspectives on Learning Experience and Outcomes, Factor 2—Established Online Community and Support, Factor 3—Delivery: An Overall Perspective; Factor Rotation: Promax with Kaiser normalisation; Rating is based on 5 response categories, ranging from 1—strongly disagree to 5—strongly agree; I-CVI (R, C)—Items Content Validity Index (Relevancy). While all experts rated the items as very clear, six items were rated as somewhat relevant by the experts within I-CVI (relevance).
Table 4. Correlation matrix for the factors of the summative assessment scale.

Factors/Subscales | 1 | 2 | 3
1. Content and Structure: Perspective on Learning Experience and Outcomes | - | 0.830 ** | 0.803 **
2. Established Online Community and Support | 0.830 ** | - | 0.750 **
3. Delivery: An Overall Perspective | 0.803 ** | 0.750 ** | -
**—Correlation is significant at the 0.01 level.
Table 5. Reliability test and descriptive statistics of the summative assessment scale and its three factors.

Factors/Subscales | n (items) | Mdn | SD | Cronbach α Coefficient | 95% CI Lower | 95% CI Upper | p Value
Content and Structure: Perspective on Learning Experience and Outcomes | 17 | 4.59 | 0.565 | 0.953 | 4.37 | 4.53 | <0.001
Established Online Community and Support | 10 | 4.60 | 0.578 | 0.920 | 4.33 | 4.50 | <0.001
Delivery: An Overall Perspective | 6 | 4.67 | 0.548 | 0.889 | 4.49 | 4.65 | <0.001
Summative Assessment Scale | 33 | 4.61 | 0.533 | 0.969 | 4.38 | 4.54 | <0.001
Note: Mdn—Median; SD—Standard Deviation; CI—Confidence Interval.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
