Article

From Teachers’ Perspective: Can an Online Digital Competence Certification System Be Successfully Implemented in Schools?

Faculty of Organization and Informatics, University of Zagreb, 42000 Varazdin, Croatia
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(8), 3785; https://doi.org/10.3390/app12083785
Submission received: 17 March 2022 / Revised: 31 March 2022 / Accepted: 7 April 2022 / Published: 8 April 2022
(This article belongs to the Special Issue Smart Education Systems Supported by ICT and AI)

Abstract

This study aims to assess the implementation effectiveness of an online platform for digital competence (DC) certification in schools. The testing platform was a prototype of a DC certification system developed and piloted during 2019 in primary and secondary schools in six European countries, involving more than 800 teachers and 6000 students. The study provided evidence that DC acquisition, evaluation, and certification can be effectively integrated and evaluated within formal curricula in primary and secondary schools. In addition, it was confirmed that information quality is a significant predictor of the impact on the platform end-users. In contrast, the quality of service is not a significant predictor of a successful implementation of a cloud-based platform that offers an intuitive user interface and proper online help, such as massive open online courses (MOOCs). Furthermore, the developed instrument can help schools implement and assess platforms for DC certification and help policymakers pursue and monitor the implementation of such platforms in schools.

1. Introduction

The European Commission recognized the relevance of digital competence (DC) among citizens by releasing many publications (e.g., DigCompEdu, DigCompOrg, DigCompConsumers) and aiming to bring a digital transformation to society while focusing on five areas of DC [1]: (1) information and data literacy, (2) communication and collaboration, (3) digital content creation, (4) safety, and (5) problem-solving. In a broader sense, DC “involves the confident, critical and responsible use of, and engagement with, digital technologies for learning, at work, and for participation in society” [2].
DC acquisition and assessment should start early, in primary school [3,4,5], and should be integrated into the formal educational curriculum [6,7]. This approach would enable schools to identify gaps in specific DCs and introduce a plan to address them. The assessment of DC has been studied more in higher education [8] than in primary and secondary education [9]. In general, validated assessments are essential in primary and secondary education because they help students improve their learning, develop necessary abilities, and measure their progress [10]. A three-year longitudinal study [11] further contributes to this matter by questioning whether purposely designed DC teaching can accelerate the natural course of DC development.
Therefore, it is necessary to study the effectiveness of the implementation of DC development systems that could enable young students to acquire the desired set of DCs. In this respect, new cloud-based teaching methodologies and services that enable DC acquisition in primary and secondary schools in Europe (hereinafter referred to as the CRISS platform) were developed. The CRISS platform was piloted from 2017 until 2019 as a part of the Horizon 2020 project. This platform aims to deliver user-driven and adaptive technological solutions that allow guided acquisition, evaluation, and certification of DC in primary and secondary education. It was used both by students and teachers.
This study aims to assess the implementation effectiveness of the CRISS platform as an online DC certification system in primary and secondary schools following the well-known DeLone and McLean [12,13] Information System Success Model (D&M Model). In that sense, the following research questions (RQs) are prompted:
  • RQ1: What factors impact teachers’ perspectives on the successful implementation of an online DC certification system in primary and secondary schools?
  • RQ2: What are the relationships between those factors?

2. Literature Review

This literature review followed the recommendations from [14] to thoroughly investigate the current state of DC in education with respect to its measurement and assessment supported by ICT. All included studies satisfied at least one of the following criteria:
  • Indicate a lack of DC in children;
  • Describe systems for assessment and certification of DC;
  • Describe the acquisition of DC through the curriculum.
First, it is interesting to report that various studies dealt with teachers’ competences [4,15,16,17,18,19,20] and the ways they adopted the technology in their classes [21,22]. One of the possible reasons is that teachers are considered responsible for students’ DC assessment and acquisition at all levels of education [23]. In contrast, numerous studies discussed the lack of and need for developing DC among university students [8,23,24]. A smaller number of studies included primary and secondary students [25,26], as confirmed by recent research [11,27]. Several authors [3,4,5,9] suggested that the acquisition and assessment of DC should start early in primary schools as a part of the formal curriculum.
Our literature review revealed only a few examples of tools for competence development or assessment (e.g., [28,29]). The identified tools are implemented as a part of formal education, employment training, or life-long learning for citizens. To the best of our knowledge, none of those tools has been assessed for its exploitation and successful implementation. However, their further development is expected in the future [30]. From a theoretical point of view, the findings of [31] are substantial: the authors analyzed 32 different frameworks for 21st-century competences and indicated that competences must be operationally defined and embedded within and across core subjects to facilitate their implementation in schools. However, in terms of interdisciplinarity, “… intentions and practice seemed still far apart” [31] (p. 299).
It can be concluded that there is a lack of assessment instruments and tools for competence-based education [32]. According to [33], such instruments are needed to optimize students’ learning and inform them about their progress. Moreover, ref. [34] recognized that it is necessary to develop assessment criteria for each competence, based on which students’ progress could be tracked at an individual or group level. Based on the described literature review, we conclude the following:
  • Only a modest number of studies deal with DC acquisition and certification in primary and secondary schools, and they mostly indicate its importance without providing an implementation solution.
  • None of the studies tried to assess the implementation of the systems for DC acquisition.
  • There seems to be a need to start with DC education and assessment from the earliest age and integrate it into the formal curriculum.

3. Research Aims

Following the conclusions from the previous section, this study further investigates the field of tools for DC assessment by evaluating the effectiveness of an online system for DC assessment and certification of students in primary and secondary schools from the teachers’ perspective. Since recent findings [23,35,36] suggest that teachers should be responsible for incorporating DC assessment and acquisition into schools, we focused our work on their perceptions of DC acquisition and assessment in classes. Teachers need to understand how ICT supports the learning–teaching environment to advance education [37]. Furthermore, teachers are the ones who will have to adapt their teaching practices and materials or apply certain technology according to new competency-based curricula. Therefore, within the context of this study, it is crucial to analyze the teachers’ perceptions of the system that supports their work and contributes to the development of students’ DC. The following research objectives (ROs) are defined to answer the research questions from the first section:
  • RO1. To propose a DC certification system success model;
  • RO2. To develop and validate a survey instrument that can empirically test and theorize the model;
  • RO3. To examine the relationships among the variables and their relative impact on DC certification system success.

4. Research Context

An appropriate research context was set up to answer the main research questions and test the hypotheses of this study. The CRISS platform was provided within the Horizon 2020 project to selected primary and secondary schools in six European countries—Croatia, Greece, Italy, Romania, Spain, and Sweden. In each country, a project partner monitored teachers’ and students’ use of the platform for several months. For ease of use, the interface of the CRISS platform, as well as its scenarios, tasks, and activities, was translated into the target languages. The latter were also adapted to fit the country-specific context and different educational levels (primary or secondary).
The CRISS platform is a cloud-based platform consisting of teaching methodologies and services that enable the acquisition of DC in schools. It is based on the validated CRISS DC Framework [38], which is aligned with the European DigComp. It also follows the “integration pedagogy” concept introduced by [39] as a valid approach to developing competence assessment that focuses on learning (mastering) DC. In the CRISS DC Framework, DC is divided into 12 sub-competences grouped into five areas. To attain an individual sub-competence, each student should produce certain evidence according to a defined set of performance criteria. Moreover, each performance criterion consists of indicators that provide the measurements or conditions required to analyze the evidence with regard to the performance criterion and competence attainment.
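To make the area → sub-competence → performance criterion → indicator hierarchy described above more tangible, the following minimal R sketch models one hypothetical sub-competence as a nested list. The names and texts are illustrative placeholders only, not actual CRISS framework content; only the area label is taken from the five DC areas listed in the Introduction.

```r
# Illustrative only: a hypothetical sub-competence from a CRISS-like DC framework,
# modelled as a nested list (area > sub-competence > performance criteria > indicators).
sub_competence <- list(
  area           = "Communication and collaboration",           # one of the five DC areas
  sub_competence = "Collaborating through digital channels",    # placeholder name
  performance_criteria = list(
    list(
      description = "Shares digital artefacts with peers",      # placeholder criterion
      indicators  = c("artefact uploaded to a shared space",    # conditions used to analyse
                      "at least one peer comment addressed")    # the evidence produced
    )
  )
)

# A student's evidence would be analysed against the indicators of each
# performance criterion to decide whether the sub-competence is attained.
str(sub_competence, max.level = 2)
```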
Teachers can plan students’ learning within the CRISS platform by choosing activities and tasks from the repository and applying them according to their teaching practice. They also evaluate students’ activities and tasks performed within the platform and provide them with feedback. Each successfully evaluated activity brings students closer to attaining a certain sub-competence. An overview of a CRISS DC certification process integrated into the CRISS platform is presented in Figure A1 of Appendix A.
DC can be evaluated through human or technological interventions on the CRISS platform. Human interventions are performed by teachers and students using various adaptable tools (e.g., checklists, rubrics, scales) implemented within the platform. These tools are designed to be easily used by the teacher and for students’ self and/or peer evaluation. Furthermore, the CRISS platform automatically performs a technological intervention by tracking students’ activities, based on which it collects relevant information.

5. Research Model and Methodology

To assess and identify the most relevant factors for the effective implementation of the CRISS platform, we used the D&M IS Success Model [12,13]. It is one of the most cited models in the field [40] and serves as a reference point for many other models that attempt to capture information system (IS) success or effectiveness (e.g., [41,42,43]).
The D&M IS Success Model and the IS theory can be applied here because the CRISS platform is an information system. The platform utilizes processes and procedures for DC acquisition, evaluation, and certification as defined in the validated CRISS DC Framework [38]. Moreover, it involves people (teachers and students), equipment, and other related software for online learning and collaboration.
The authors of the D&M Model identified six components or constructs of IS success: System Quality, Information Quality, Service Quality, System Use, User Satisfaction, and Net Impacts. Each of them is described briefly below, paraphrasing the original authors [12,13]. System Quality measures the desirable technical characteristics of an IS. Since this dimension captures the system itself, it is oriented toward technical specifications such as data processing capabilities, response time, ease of use, system reliability, and sophistication. Information Quality includes the desirable characteristics of the system output in the form of information, such as its relevance, understandability, accuracy, completeness, usability, and importance. Service Quality measures the quality of support that system users receive from the IS department and IT support personnel. System Use indicates the degree and manner in which staff and customers utilize the capabilities of an IS, while User Satisfaction measures users’ satisfaction with reports, platforms, and support services. Finally, the extent to which the IS contributes to the success of individuals, groups, and other stakeholders is represented as Net Impacts. It measures the system’s outcomes and is inevitably compared to its purpose. For this reason, the Net Impacts construct “will be the most contextual dependent and varied of the six D&M Model success dimensions” [13] (p. 59).
The authors of the D&M Model suggest that constructs and related measures should be systematically selected. This should be done considering contextual contingencies (such as the organization’s size or structure, technology, and individual characteristics of the system) to develop a comprehensive measurement model and instrument for a particular context. Therefore, to detect factors that influence the successful implementation of a DC system (RQ1) and analyze their relationships (RQ2), we proposed the research model with hypotheses as indicated in the D&M Model [13] (see Figure 1):
Hypothesis 1 (H1).
The quality of the CRISS platform positively affects its use.
Hypothesis 2 (H2).
The quality of the CRISS platform positively affects user satisfaction with it.
Hypothesis 3 (H3).
The information quality produced by the CRISS platform positively affects its use.
Hypothesis 4 (H4).
The information quality produced by the CRISS platform positively affects user satisfaction with it.
Hypothesis 5 (H5).
The service quality positively affects the use of the CRISS platform.
Hypothesis 6 (H6).
The service quality positively affects user satisfaction with the CRISS platform.
Hypothesis 7 (H7).
The use of the CRISS platform positively affects its net impacts.
Hypothesis 8 (H8).
The use of the CRISS platform positively affects user satisfaction.
Hypothesis 9 (H9).
User satisfaction positively affects the net impacts of the CRISS platform.
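For illustration, the nine hypothesized paths can be written down directly as a structural model specification. The sketch below assumes the seminr R package purely as an example (the study itself estimated the model in SmartPLS 3); the construct names follow the D&M dimensions used above.

```r
# Sketch of the hypothesized structural model (H1-H9), assuming the seminr package.
# install.packages("seminr")
library(seminr)

criss_structure <- relationships(
  # H1-H6: the three quality dimensions affect System Use and User Satisfaction.
  paths(from = c("System Quality", "Information Quality", "Service Quality"),
        to   = c("System Use", "User Satisfaction")),
  # H8: System Use affects User Satisfaction.
  paths(from = "System Use", to = "User Satisfaction"),
  # H7, H9: System Use and User Satisfaction affect Net Impacts.
  paths(from = c("System Use", "User Satisfaction"), to = "Net Impacts")
)
```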
This study adopted a quantitative methodology. Data were collected via a survey administered to primary and secondary school teachers to achieve the previously proposed research aims. The research methodology followed the typical procedures [44] for measurement instrument development, data collection, and analysis and was conducted in four phases: (1) measurement instrument development; (2) sample and procedure; (3) measurement model assessment; and (4) structural model testing. The following sections elaborate on each of these phases.

6. Measurement Instrument Development

The complete process of measurement instrument development is shown in Figure 2. As noted in the previous sections, an analysis of the successful implementation of DC systems such as the CRISS platform has not yet been reported. Therefore, we designed the instrument from the ground up using the constructs from the D&M Model. In doing so, we relied on the literature review and focus groups of experts, as suggested by [45,46,47]. Experts were engaged based on their expertise in e-learning, pedagogy, teaching methodology, and assessment activities. After establishing content validity and preliminary construct validity, the survey instrument was translated into the six target languages and implemented in LimeSurvey, a free, open-source tool for creating online surveys.
The final instrument ready for the field test consisted of 48 items: 7 items in System Quality, 7 items in Information Quality, 6 items in Service Quality, 10 items in System Use, 5 items in User Satisfaction, and 13 items in Net Impacts. The reliability and validity of the final instrument were tested with a sample of 298 teachers.

7. Sample and Procedure

The sample was drawn from 145 schools according to the guidelines established within the project, which aimed to ensure extensive participation and an equally high completion rate of activities on the platform. Teachers at selected primary and secondary schools had to actively participate in the project and use the platform for more than three months to be included in the research. In total, 1102 teachers were contacted about the study’s purpose and were sent the link to an online survey.
The data were collected at the end of the school year 2018/2019, between May and September 2019, i.e., four months after the CRISS platform had been introduced to the selected schools. In total, 400 teachers voluntarily participated in the assessment of the platform. The obtained data were carefully screened; outliers were removed, and 298 complete answers were left for further processing in R [48]. The overall response rate was 36.3% before data exclusions and 27.04% after. Both rates align with findings showing that the average response rate in online surveys ranges between 20 and 47% [49].
The demographic characteristics of the sample are shown in Table 1. The sample size of 298 teachers yielded a subject-to-item ratio of roughly 6:1. Therefore, PLS-SEM, which has proven very robust for smaller samples [51], was applied in further work using SmartPLS software version 3 (SmartPLS GmbH, Boenningstedt, Germany) [50].
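As an illustration of this estimation step, the sketch below fits the proposed model with PLS-SEM in R using the seminr package as a stand-in for SmartPLS 3. Here, teacher_data is a hypothetical data frame holding the 298 cleaned responses, with the 48 survey items described in the previous section as columns (SQ1–SQ7, IQ1–IQ7, SVQ1–SVQ6, SU1–SU10, US1–US5, NI1–NI13); criss_structure is the structural model sketched in Section 5.

```r
# Minimal PLS-SEM estimation sketch (seminr used here instead of SmartPLS 3).
library(seminr)

# Reflective measurement model built from the 48 survey items.
criss_measures <- constructs(
  composite("System Quality",      multi_items("SQ",  1:7)),
  composite("Information Quality", multi_items("IQ",  1:7)),
  composite("Service Quality",     multi_items("SVQ", 1:6)),
  composite("System Use",          multi_items("SU",  1:10)),
  composite("User Satisfaction",   multi_items("US",  1:5)),
  composite("Net Impacts",         multi_items("NI",  1:13))
)

# 'teacher_data' is a hypothetical data frame of the 298 cleaned responses.
criss_pls <- estimate_pls(data              = teacher_data,
                          measurement_model = criss_measures,
                          structural_model  = criss_structure)
summary(criss_pls)
```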

8. Results

The evaluation of the measurement and structural models was performed using variance-based SEM. The focus was on examining the relationships among the latent variables and their indicators. First, the measurement model was examined for reliability and validity. Afterwards, the fit of the structural model was tested, and the significance of the path coefficients was determined.

8.1. Measurement Model Assessment

To be retained and to ensure good convergent validity, an indicator had to load above 0.6 onto its posited construct [45]. Following this rule of thumb, eight indicators (SQ6, SQ7, SVQ5, SVQ6, SU1, SU2, SU3, and SU4) were removed across three constructs. One exception was made for the indicator SVQ4, which was retained for theoretical reasons; its retention did not substantially affect internal consistency reliability. Items that did not converge on the predicted construct or cross-loaded on two constructs were also removed from the reflective measurement model: SQ5, IQ2, NI1, and NI2.
The assessment of the reflective measurement model is shown in Table 2. All indicator loadings exceeded the aforementioned threshold of 0.60, the criterion for good measurement of the latent variables. All Cronbach’s alpha (CA) and composite reliability (CR) values were higher than 0.70 [45,51], indicating satisfactory internal consistency reliability. The average variance extracted (AVE) values were also greater than 0.50, indicating good convergent validity [51]. After removing the indicators whose cross-loadings were higher than their outer loadings on the prospective constructs, the HTMT statistics indicated that discriminant validity was established (see Table 3). This was additionally examined by conducting the bootstrapping procedure: no HTMT confidence interval (97.5%) contained the value of one, which would have indicated a lack of discriminant validity [51].
After omitting the mentioned indicators, the analysis showed that the construct measures had adequate convergent and discriminant validity and good reliability. The final instrument with the 36 remaining indicators is presented in Table A1 of Appendix B.
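The reliability and validity checks reported above can be reproduced along the lines of the following sketch, which continues the seminr example from the previous sections; the accessor names follow seminr's documented summary objects and should be treated as an illustration rather than the authors' actual workflow.

```r
# Reliability and validity checks for the reflective measurement model (sketch).
criss_summary <- summary(criss_pls)

criss_summary$loadings          # indicator loadings; retain items loading > 0.60
criss_summary$reliability       # Cronbach's alpha, composite reliability (rhoC), and AVE
criss_summary$validity$htmt     # HTMT matrix; values should stay below 0.90

# Bootstrapped HTMT confidence intervals should not contain the value 1.
criss_boot <- bootstrap_model(seminr_model = criss_pls, nboot = 5000, seed = 42)
summary(criss_boot)$bootstrapped_HTMT
```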

8.2. Structural Model Testing

Structural model testing started with examining a set of predictor constructs for the variance inflation factor (VIF). All values were below the cut-off value of 5, which means no collinearity issues were found [51]. Next, we calculated the coefficient of determination for endogenous variables (R2) and their predictive relevance (Q2). Results are shown in Table 4.
The evaluation of goodness-of-fit indices for the structural model was performed in R since SmartPLS software provides less detailed data. All constructs showed satisfactory fit since all indices were in the desired range (see Table 5) [52]. These results suggested that this research model can confirm and explain the teachers’ perception of the CRISS platform.
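The article does not list the exact R code used for this step; as an illustration, per-construct fit indices of the kind reported in Table 5 can be obtained with the lavaan package, sketched here for the System Quality construct only (item names as in Appendix B, teacher_data again a hypothetical data frame). The exact degrees of freedom depend on the model specification, so the output will not necessarily match Table 5 line by line.

```r
# Sketch: goodness-of-fit indices for a single-construct CFA, assuming the lavaan package.
library(lavaan)

sq_model <- 'SystemQuality =~ SQ1 + SQ2 + SQ3 + SQ4'
sq_fit   <- cfa(sq_model, data = teacher_data)

# Chi-square(df), RMSEA, GFI, AGFI, and CFI, i.e., the indices reported in Table 5.
fitMeasures(sq_fit, c("chisq", "df", "rmsea", "gfi", "agfi", "cfi"))
```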
As shown in Table 6, the path coefficient estimates for the hypothesized relationships ranged from 0.10 to 0.70. All were significant at the 1% level except in two cases [51]: the path between Service Quality and User Satisfaction was significant at the 5% level, while the path between Service Quality and System Use was nonsignificant, so H5 was rejected. The measured f2 values of the relationships between the constructs fell between 0.03 and 0.10 (low effect sizes), between 0.15 and 0.21 (medium effect sizes), or at 0.90 (large effect size) [53].
Figure 3 shows the revised research model, i.e., relationships between the constructs based on supported hypotheses.
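A sketch of the corresponding collinearity, significance, and effect-size checks, continuing the same seminr example and under the same caveat that these accessors are illustrative:

```r
# Structural model checks (sketch), reusing objects from the earlier snippets.
criss_summary$vif_antecedents            # collinearity: all VIF values should be below 5
summary(criss_boot)$bootstrapped_paths   # path coefficients with t values (5000 resamples)
criss_summary$fSquare                    # f-square effect sizes per structural relationship
```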

8.3. Importance–Performance Map Analysis (IPMA)

The importance–performance map analysis (IPMA) was applied as additional support to the standard results reported in PLS-SEM and to bring more clarity to the impact of the exogenous constructs on the target construct of this model, Net Impacts [51]. In a practical sense, we detected the areas that have proven to be highly important but relatively underperforming on the CRISS platform. The indirect result of the IPMA is a priority list of potential improvements to the platform. In statistical terms, we obtained the total effects (importance) and average latent variable scores (performance) of the Net Impacts predecessors [51]. Since all indicators were measured on the same 5-point Likert scale, no corrections were needed during the analysis conducted in SmartPLS software version 3 [50]. The summary of the calculated IPMA data for Net Impacts is shown in Table 7.
User Satisfaction had the strongest total effect on the target construct Net Impacts, followed by Information Quality, System Use, System Quality, and Service Quality. The lowest performance value with respect to Net Impacts was exhibited by System Use, followed by System Quality, User Satisfaction, Information Quality, and Service Quality. The values of the predecessor constructs were plotted in an importance–performance map divided into four quadrants (see Figure 4). The cut-off values for the quadrants and their interpretation were determined according to [54]: the cut-off value of 0.35 for the x-axis (importance) was the mean of the importance scores, and the value of 50 was used for performance (y-axis) as the midpoint of the 0–100 range.
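Using only the values already reported in Table 7 and the cut-offs described above (mean importance of 0.35, performance midpoint of 50), the quadrant assignment behind Figure 4 can be illustrated with a few lines of base R; in SmartPLS the rescaling of latent variable scores to the 0–100 performance range is performed automatically, so this is just a sketch of the classification logic.

```r
# Sketch: quadrant classification of the IPMA results from Table 7 (base R only).
ipma <- data.frame(
  construct   = c("Information Quality", "Service Quality", "System Quality",
                  "System Use", "User Satisfaction"),
  importance  = c(0.43, 0.07, 0.30, 0.34, 0.61),  # total effects on Net Impacts
  performance = c(58, 63, 49, 47, 54)             # latent variable scores rescaled to 0-100
)

imp_cut  <- mean(ipma$importance)  # 0.35, the x-axis cut-off used in Figure 4
perf_cut <- 50                     # midpoint of the 0-100 performance range

ipma$quadrant <- ifelse(ipma$importance >= imp_cut & ipma$performance >= perf_cut,
                        "keep up the good work",
                 ifelse(ipma$importance >= imp_cut, "concentrate here",
                 ifelse(ipma$performance >= perf_cut, "possible overkill", "low priority")))
ipma
```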
Information Quality and User Satisfaction fell into the first quadrant, which corresponds to “keep up the good work”, meaning that both have high importance and high performance. No constructs fell into the second quadrant, which would contain the constructs respondents considered essential and on which improvement efforts should concentrate. System Use, which lies on the borderline between the second and third quadrants, should be targeted first by managerial actions. The second area to improve would be System Quality, which belongs to the third quadrant and is characterized as “low priority”. However, since the second quadrant contained no construct, excess resources should be allocated to the third quadrant [55]. The fourth quadrant indicates “possible overkill” and included Service Quality, meaning that respondents mainly agreed that they had received proper support from the persons responsible for the CRISS platform but did not consider it very important for the Net Impacts.

9. Discussion

This study tackled the effectiveness of an online platform for the acquisition, evaluation, and certification of DC in schools, concentrating on teachers’ attitudes. It provides a follow-up to previous work by scholars [3,4,5,6,7] who indicated the need to integrate DC acquisition and evaluation into the formal education curriculum and thus start the process early in schools. This study also substantially contributes to the field since it explores the possibilities for implementing such systems in schools. DC assessment has been studied more in higher education [8] than in primary and secondary education [9], and to the best of our knowledge, none of the previous studies examined the effectiveness of tools for DC assessment in a formal curriculum, e.g., [28,29,56].
This study’s findings confirmed that the overall D&M Model can measure the successful implementation of online DC certification systems in primary and secondary schools, which is related to the first research question (RQ1). Our research model showed that Information Quality, besides System Quality, is a significant predictor of the actual use of the CRISS platform, thus leading to greater satisfaction with the platform. This is in line with findings that also confirmed a strong connection between Information Quality and Net Impacts [13]. Overall, the model showed valid psychometric properties and acceptable goodness-of-fit. We can conclude that the research model can effectively measure the success of the CRISS platform.
For the second research question (RQ2), this study identified and analyzed relationships between the constructs of successful implementation of the CRISS platform. Specifically, it is interesting to conclude that User Satisfaction has the largest effect on perceived Net Impacts. Information Quality has stronger effects on System Use and User Satisfaction than the quality of the platform itself. The IPMA has also confirmed this impact, identifying Information Quality and User Satisfaction as the constructs with the highest importance for the CRISS platform’s overall effects (Net Impacts).
This study did not reveal a significant relationship between Service Quality and the actual usage of the CRISS platform (System Use). It only detected a weak effect of service quality on User Satisfaction. This could be due to two plausible reasons. The first is the massive open online course (MOOC) that was created instead of an instruction manual to help teachers and students use the platform. The second is that, during the project, helpdesk officers often called schools and communicated with teachers to identify problems, motivate them to use the platform, and boost their self-confidence. Moreover, one of the project’s aims was to have a sustainable platform that teachers could use without requiring conventional IT service support. Furthermore, the IPMA indicated that online help (such as MOOCs), an intuitive interface, and the platform’s ease of use could be adequate means of support for teachers.

10. Limitations

The current research was focused solely on the teachers’ perspective of using the CRISS platform and the overall concept of DC acquisition, evaluation, and certification. However, the measurement instrument was designed in a general way enabling its application to any system that deals with DC evaluation or certification. With slight modifications, it can be adapted to different target audiences. Although teachers are the primary users of the platform, students are also affected by the proposed changes and are actively using the CRISS platform. In that respect, the authors of this study have already started working on adjusting the CRISS success instrument for students to assess their perception of the newly introduced CRISS concept.
In addition, since the number of teachers per country did not reach the optimal number for carrying out the model analysis at the country level, the responses could be culturally conditioned. Therefore, this study should be further developed and carried out in each country separately to account for socio-cultural aspects.

11. Conclusions

Conceptually, this study extends the utilization of the D&M Model to the setting of DC evaluation and certification from the teachers’ point of view. It considerably contributes to the field of education by showing it is possible to effectively implement DC evaluation and certification within the compulsory curriculum in schools, thus starting with DC education from the earliest age. Additionally, it reveals that the quality of service support is not vital for the successful implementation of such a platform, as long as it is easy to use and supported by online instructions (e.g., MOOCs).
To the best of our knowledge, the CRISS platform is the first endeavor to deliver a complete, cloud-based solution for the acquisition, evaluation, and certification of DC in Europe through a formal school curriculum. Considering the study [31], which summarizes theoretical recommendations from a large number of frameworks dealing with 21st-century competences, this study presents a step forward.
Although the measurement instrument was applied to the CRISS platform, it can be generalized and applied to other similar platforms. Hence, schools can use the measurement instrument developed in this study to assess the need for improving their systems for DC certification or the elements that impact the effectiveness of such systems.

Author Contributions

Conceptualization and methodology, I.B.; literature review, A.S.; data collection, I.B. and A.S.; data analysis, A.S.; interpretation of results, I.B. and A.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was carried out as a part of the ‘Demonstration of a scalable and cost-effective cloud-based digital learning infrastructure through the Certification of digital competences in primary and secondary schools’ (CRISS) project funded by the European Union’s Horizon 2020 research and innovation program under Grant Agreement No. 732489.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to GDPR.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Figure A1. An overview of a CRISS DC certification process.

Appendix B

Table A1. Final CRISS success instrument.
System Quality
SQ1: The CRISS platform is easy to use.
SQ2: The CRISS platform is available whenever I need to use it.
SQ3: The CRISS platform runs fast.
SQ4: The CRISS platform has all the features necessary to accomplish my tasks (e.g., create and share content, work with others, import data from other tools).
Information Quality
IQ1: The information I find on the CRISS platform is useful to perform my activities.
IQ3: I can easily find the information I need on the CRISS platform.
IQ4: The information I find on the CRISS platform is complete.
IQ5: The information I find on the CRISS platform is accurate.
IQ6: The presentation of the assessment results is easy to understand.
IQ7: The information about the students’ progress is clear.
Service Quality
SVQ1: Helpdesk is available to help me use the CRISS platform.
SVQ2: Helpdesk responds promptly when I have a problem with the CRISS platform.
SVQ3: Helpdesk provides a useful response when I have a problem with the CRISS platform.
SVQ4: Other forms of online help are available (e.g., chat, social networks) for using the CRISS platform.
System Use
SU5: I use the CRISS platform to collaborate with my colleagues (e.g., creation of CAS, assessments).
SU6: I use the CRISS platform to communicate/give feedback with/to my students.
SU7: I use the CRISS platform to tag my content (e.g., CAS).
SU8: I use the CRISS platform to track the progress and achievements of my students.
SU9: I use the CRISS platform to provide additional CAS or activities to students based on their assessment results.
SU10: I use the CRISS platform to integrate the content or activities from external tools (e.g., YouTube, Facebook, Flickr, Google Drive).
User Satisfaction
US1: I feel comfortable using the CRISS platform.
US2: I find the CRISS platform useful for additional assessment of my students.
US3: I think it is worthwhile to use the CRISS platform.
US4: I feel confident using the CRISS platform.
US5: I am satisfied with the CRISS platform possibilities.
Net Impacts
NI3: The CRISS platform helps me to improve the engagement of my students.
NI4: The CRISS platform enables me to provide clear evaluation criteria to my students.
NI5: I am able to provide better feedback to my students through the CRISS platform.
NI6: I am able to provide timely feedback to my students.
NI7: The CRISS platform extends my capacity for assessment.
NI8: The CRISS platform saves me time by supporting my teaching activities (planning process, guiding students, assigning tasks, monitoring students’ activities, etc.).
NI9: The CRISS platform allows me to track the progress of my students much better than I could without the CRISS platform.
NI10: I am able to detect underperforming students more quickly than I would without the CRISS platform.
NI11: The CRISS platform helps me to make more suitable decisions to enable students’ progress.
NI12: The CRISS platform enables me to propose tasks that allow students to be creative in solving them (ingenious, original).
NI13: The CRISS platform enables me to track my students’ reasoning when solving the tasks.
Note. Answers on 1–5-point Likert-type scale (1—strongly disagree; 2—disagree; 3—uncertain; 4—agree; 5—strongly agree; NA—not applicable).

References

  1. Vuorikari, R.; Punie, Y.; Carretero, S.; Brande, L.V.D. DigComp 2.0: The Digital Competence Framework for Citizens. Update Phase 1: The Conceptual Reference Model; Publication Office of the European Union: Luxembourg, 2016; ISBN 978-92-79-58876-1. [Google Scholar]
  2. Council of the European Union. Council Recommendation of 22 May 2018 on Key Competences for Lifelong Learning. Off. J. Eur. Union 2018, 1–13. Available online: https://eur-lex.europa.eu/ (accessed on 1 December 2021).
  3. Siddiq, F.; Gochyyev, P.; Wilson, M. Learning in Digital Networks—ICT Literacy: A Novel Assessment of Students’ 21st Century Skills. Comput. Educ. 2017, 109, 11–37. [Google Scholar] [CrossRef] [Green Version]
  4. Martín, S.C.; González, M.C.; Peñalvo, F.J.G. Digital Competence of Early Childhood Education Teachers: Attitude, Knowledge and Use of ICT. Eur. J. Teach. Educ. 2019, 43, 210–223. [Google Scholar] [CrossRef]
  5. Zabotkina, V.; Korovkina, M.; Sudakova, O. Competence-Based Approach to a Module Design for the Master Degree Programme in Translation: Challenge of Tuning Russia Tempus Project. Tuning J. High. Educ. 2019, 7, 67–92. [Google Scholar] [CrossRef]
  6. Tudor, S.L. The Open Resources and Their Influences on the Formation of Specific Competencies for the Teaching Profession. In Proceedings of the 10th International Conference on Electronics, Computers and Artificial Intelligence (ECAI 2018), Iasi, Romania, 28–30 June 2018; pp. 1–4. [Google Scholar]
  7. Varela, C.; Rebollar, C.; García, O.; Bravo, E.; Bilbao, J. Skills in Computational Thinking of Engineering Students of the First School Year. Heliyon 2019, 5, 1–9. [Google Scholar] [CrossRef]
  8. Sillat, L.H.; Tammets, K.; Laanpere, M. Digital Competence Assessment Methods in Higher Education: A Systematic Literature Review. Educ. Sci. 2021, 11, 402. [Google Scholar] [CrossRef]
  9. Siddiq, F.; Hatlevik, O.E.; Olsen, R.V.; Throndsen, I.; Scherer, R. Taking a Future Perspective by Learning from the Past—A Systematic Review of Assessment Instruments That Aim to Measure Primary and Secondary School Students’ ICT Literacy. Educ. Res. Rev. 2016, 19, 58–84. [Google Scholar] [CrossRef] [Green Version]
  10. Cutumisu, M.; Adams, C.; Lu, C. A Scoping Review of Empirical Research on Recent Computational Thinking Assessments. J. Sci. Educ. Technol. 2019, 28, 651–676. [Google Scholar] [CrossRef]
  11. Lazonder, A.W.; Walraven, A.; Gijlers, H.; Janssen, N. Longitudinal Assessment of Digital Literacy in Children: Findings from a Large Dutch Single-School Study. Comput. Educ. 2020, 143, 1–8. [Google Scholar] [CrossRef]
  12. De Lone, W.H.; McLean, E.R. Information Systems Success: The Quest for the Dependent Variable. Inf. Syst. Res. 1992, 3, 60–95. [Google Scholar] [CrossRef] [Green Version]
  13. DeLone, W.H.; McLean, E.R. Information Systems Success Measurement; Now Publishers Inc.: Hanover, PA, USA, 2016; ISBN 978-1-68083-143-6. [Google Scholar]
  14. Kitchenham, B.; Charters, S. Guidelines for Performing Systematic Literature Reviews in Software Engineering; Technical Report EBSE 2007-001; Keele University and Durham University Joint Report: Newcastle, UK, 2007. [Google Scholar]
  15. Alarcón, R.; Jiménez, E.P.; Vicente-Yagüe, M.I. Development and Validation of the DIGIGLO, a Tool for Assessing the Digital Competence of Educators. Br. J. Educ. Technol. 2020, 51, 1–15. [Google Scholar] [CrossRef]
  16. Brevik, L.M.; Gudmundsdottir, G.B.; Lund, A.; Strømme, T.A. Transformative Agency in Teacher Education: Fostering Professional Digital Competence. Teach. Teach. Educ. 2019, 86, 1–15. [Google Scholar] [CrossRef]
  17. Demirata, A.; Sadik, O. Design and Skill Labs: Identifying Teacher Competencies and Competency-Related Needs in Turkey’s National Makerspace Project. J. Res. Technol. Educ. 2021, in press. [Google Scholar] [CrossRef]
  18. Hinojo-Lucena, F.-J.; Aznar-Diaz, I.; Caceres-Reche, M.-P.; Trujillo-Torres, J.-M.; Romero-Rodriguez, J.-M. Factors Influencing the Development of Digital Competence in Teachers: Analysis of the Teaching Staff of Permanent Education Centres. IEEE Access 2019, 7, 178744–178752. [Google Scholar] [CrossRef]
  19. Instefjord, E.J.; Munthe, E. Educating Digitally Competent Teachers: A Study of Integration of Professional Digital Competence in Teacher Education. Teach. Teach. Educ. 2017, 67, 37–45. [Google Scholar] [CrossRef]
  20. León, L.D.; Corbeil, R.; Corbeil, M.E. The Development and Validation of a Teacher Education Digital Literacy and Digital Pedagogy Evaluation. J. Res. Technol. Educ. 2021. [Google Scholar] [CrossRef]
  21. Scherer, R.; Siddiq, F.; Tondeur, J. All the Same or Different? Revisiting Measures of Teachers’ Technology Acceptance. Comput. Educ. 2020, 143, 1–17. [Google Scholar] [CrossRef]
  22. Yeung, A.S.; Taylor, P.G.; Hui, C.; Lam-Chiang, A.C.; Low, E.-L. Mandatory Use of Technology in Teaching: Who Cares and so What? Br. J. Educ. Technol. 2012, 43, 859–870. [Google Scholar] [CrossRef]
  23. Cordero, D.; Mory, A. Education in System Engineering: Digital Competence. In Proceedings of the 2019 IEEE 6th International Conference on Industrial Engineering and Applications (ICIEA), Tokyo, Japan, 12–15 April 2019; pp. 677–681. [Google Scholar] [CrossRef]
  24. Engelbrecht, L.; Landes, D.; Sedelmaier, Y. A Didactical Concept for Supporting Reflection in Software Engineering Education. In Proceedings of the IEEE Global Engineering Education Conference (EDUCON), Santa Cruz de Tenerife, Spain, 17–20 April 2018; pp. 547–554. [Google Scholar]
  25. Spernjak, A.; Sorgo, A. Outlines for Science Digital Competence of Elementary School Students. In Proceedings of the 41st International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), Opatija, Croatia, 21–25 May 2018; pp. 825–829. [Google Scholar]
  26. Ng, O.-L.; Liu, M.; Cui, Z. Students’ in-Moment Challenges and Developing Maker Perspectives during Problem-Based Digital Making. J. Res. Technol. Educ. 2021. [Google Scholar] [CrossRef]
  27. Stopar, K.; Bartol, T. Digital Competences, Computer Skills and Information Literacy in Secondary Education: Mapping and Visualization of Trends and Concepts. Scientometrics 2019, 118, 479–498. [Google Scholar] [CrossRef]
  28. Florián-Gaviria, B.; Glahn, C.; Gesa, R.F. A Software Suite for Efficient Use of the European Qualifications Framework in Online and Blended Courses. IEEE Trans. Learn. Technol. 2013, 6. [Google Scholar] [CrossRef]
  29. Kluzer, S.; Priego, L.P. DigComp into Action—Get Inspired, Make It Happen; Carretero, S., Punie, Y., Vuorikari, R., Cabrera, M., O’Keefe, W., Eds.; Publications Office of the European Union: Luxembourg, 2018; ISBN 978-92-79-79901-3. [Google Scholar]
  30. Bartolomé, J.; Garaizar, P.; Larrucea, X. A Pragmatic Approach for Evaluating and Accrediting Digital Competence of Digital Profiles: A Case Study of Entrepreneurs and Remote Workers; Springer: Amsterdam, The Netherlands, 2021; ISBN 0-12-345678-9. [Google Scholar]
  31. Voogt, J.; Roblin, N.P. A Comparative Analysis of International Frameworks for 21st Century Competences: Implications for National Curriculum Policies. J. Curric. Stud. 2012, 44, 299–321. [Google Scholar] [CrossRef] [Green Version]
  32. Pubule, J.; Kalnbalkite, A.; Teirumnieka, E.; Blumberga, D. Evaluation of the Environmental Engineering Study Programme at University. Environ. Clim. Technol. 2019, 23, 310–324. [Google Scholar] [CrossRef] [Green Version]
  33. der Vleuten, C.P.M.V.; Schuwirth, L.W.T. Assessment in the Context of Problem-Based Learning. Adv. Health Sci. Educ. 2019, 24, 903–914. [Google Scholar] [CrossRef] [Green Version]
  34. Brilingaitė, A.; Bukauskas, L.; Juozapavičius, A. A Framework for Competence Development and Assessment in Hybrid Cybersecurity Exercises. Comput. Secur. 2020, 88. [Google Scholar] [CrossRef]
  35. Scherer, R.; Siddiq, F.; Tondeur, J. The Technology Acceptance Model (TAM): A Meta-Analytic Structural Equation Modeling Approach to Explaining Teachers’ Adoption of Digital Technology in Education. Comput. Educ. 2019, 128, 13–35. [Google Scholar] [CrossRef]
  36. Marin-Suelves, D.; Lopez-Gomez, S.; Castro-Rodriguez, M.M.; Rodriguez-Rodriguez, J. Digital Competence in Schools: A Bibliometric Study. IEEE Rev. Iberoam. Tecnol. Aprendiz. 2020, 15, 381–388. [Google Scholar] [CrossRef]
  37. Kadıoğlu-Akbulut, C.; Çetin-Dindar, A.; Küçük, S.; Acar-Şeşen, B. Development and Validation of the ICT-TPACK-Science Scale. J. Sci. Educ. Technol. 2020, 29, 355–368. [Google Scholar] [CrossRef]
  38. Guárdia, L.; Maina, M.; Juliá, A. Digital Competence Assessment System: Supporting Teachers with the CRISS Platform. In Proceedings of the Central European Conference on Information and Intelligent Systems, Varazdin, Croatia, 27–29 September 2017; pp. 77–82. [Google Scholar]
  39. Roegiers, X. Pedagogy of Integration. Education and Training Systems at the Heart of Our Societies; DeBoeck University: Brussels, Belgium, 2010. [Google Scholar]
  40. Petter, S.; de Lone, W.; McLean, E. Measuring Information System Success: Models, Dimensions, Measures, and Interrelationships. Eur. J. Inf. Syst. 2008, 17, 236–263. [Google Scholar] [CrossRef]
  41. Chen, H.J. Linking Employees’ e-Learning System Use to Their Overall Job Outcomes: An Empirical Study Based on the IS Success Model. Comput. Educ. 2010, 55, 1628–1639. [Google Scholar] [CrossRef]
  42. Cidral, W.A.; Oliveira, T.; Felice, M.D.; Aparicio, M. E-Learning Success Determinants: Brazilian Empirical Study. Comput. Educ. 2018, 122, 273–290. [Google Scholar] [CrossRef] [Green Version]
  43. Isaac, O.; Aldholay, A.; Abdullah, Z.; Ramayah, T. Online Learning Usage within Yemeni Higher Education: The Role of Compatibility and Task-Technology Fit as Mediating Variables in the IS Success Model. Comput. Educ. 2019, 136, 113–129. [Google Scholar] [CrossRef]
  44. López, X.; Valenzuela, J.; Nussbaum, M.; Tsai, C.C. Some Recommendations for the Reporting of Quantitative Studies. Comput. Educ. 2015, 91, 106–110. [Google Scholar] [CrossRef]
  45. Straub, D.; Gefen, D. Validation Guidelines for IS Positivist Research. Commun. Assoc. Inf. Syst. 2004, 13, 380–427. [Google Scholar] [CrossRef]
  46. Guest, G.; Namey, E.; McKenna, K. How Many Focus Groups Are Enough? Building an Evidence Base for Nonprobability Sample Sizes. Field Methods 2016, 29, 3–22. [Google Scholar] [CrossRef]
  47. Vogt, D.S.; King, D.W.; King, L.A. Focus Groups in Psychological Assessment: Enhancing Content Validity by Consulting Members of the Target Population. Psychol. Assess. 2004, 16, 231–243. [Google Scholar] [CrossRef] [Green Version]
  48. R Core Team. R: A Language and Environment for Statistical Computing; R Foundation for Statistical Computing: Vienna, Austria, 2021. [Google Scholar]
  49. Nulty, D.D. The Adequacy of Response Rates to Online and Paper Surveys: What Can Be Done? Assess. Eval. High. Educ. 2008, 33, 301–314. [Google Scholar] [CrossRef] [Green Version]
  50. Ringle, C.M.; Wende, S.; Becker, J.-M. SmartPLS 3; SmartPLS GmbH: Boenningstedt, Germany, 2015. [Google Scholar]
  51. Hair, J.F.; Hult, G.T.M.; Ringle, C.M.; Sarstedt, M. A Primer on Partial Least Squares Structural Equation Modeling (PLS-SEM), 2nd ed.; Sage Publications: Thousand Oaks, CA, USA, 2017. [Google Scholar]
  52. Hooper, D.; Coughlan, J.; Mullen, M.R. Structural Equation Modelling: Guidelines for Determining Model Fit. Electron. J. Bus. Res. Methods 2008, 6. [Google Scholar] [CrossRef] [Green Version]
  53. Benitez, J.; Henseler, J.; Castillo, A.; Schuberth, F. How to Perform and Report an Impactful Analysis Using Partial Least Squares: Guidelines for Confirmatory and Explanatory IS Research. Inf. Manag. 2019. [Google Scholar] [CrossRef]
  54. Streukens, S.; Leroi-Werelds, S.; Willems, K. Dealing with Nonlinearity in Importance-Performance Map Analysis (IPMA): An Integrative Framework in a PLS-SEM Context. In Partial Least Squares Path Modeling; Latan, H., Noonan, R., Eds.; Springer International Publishing: Cham, Switzerland, 2017; ISBN 978-3-319-64068-6. [Google Scholar]
  55. Ooi, K.-B.; Hew, J.-J.; Lee, V.-H. Could the Mobile and Social Perspectives of Mobile Social Learning Platforms Motivate Learners to Learn Continuously? Comput. Educ. 2018, 120, 127–145. [Google Scholar] [CrossRef]
  56. Kluzer, S. Guidelines on the Adoption of DigComp; Rissola, G., Ed.; Telecentre Europe: Brussels, Belgium, 2015; pp. 1–28. [Google Scholar]
Figure 1. Research model.
Figure 2. The process of measurement instrument development.
Figure 3. The revised CRISS success model; * p < 0.05, ** p < 0.01.
Figure 4. Importance–performance map of predecessor constructs.
Table 1. Demographic characteristics of the sample (N = 298).

Demographic Detail | Category | Frequency | Percent
Gender | Male | 92 | 30.9%
 | Female | 206 | 69.1%
Workplace | Primary school | 65 | 21.8%
 | Secondary school | 233 | 78.2%
Age | Under 25 | 0 | 0%
 | 25–29 | 20 | 6.7%
 | 30–39 | 86 | 28.9%
 | 40–49 | 117 | 39.3%
 | 50–59 | 71 | 23.8%
 | Over 60 | 4 | 1.3%
Country | Croatia | 58 | 19.5%
 | Greece | 56 | 18.8%
 | Italy | 57 | 19.1%
 | Romania | 25 | 8.4%
 | Spain | 92 | 30.9%
 | Sweden | 10 | 3.3%
Teaching experience | Less than one year | 5 | 1.7%
 | 1–2 years | 9 | 3.0%
 | 3–5 years | 39 | 13.1%
 | 6–10 years | 53 | 17.8%
 | 11–15 years | 71 | 23.8%
 | 16–20 years | 50 | 16.8%
 | Over 20 years | 71 | 23.8%
Education | High School Diploma | 5 | 1.7%
 | Associate’s Degree | 2 | 0.6%
 | Bachelor’s Degree | 104 | 34.9%
 | Master’s Degree | 173 | 58.1%
 | Doctorate Degree | 14 | 4.7%
Computer skill | Fundamental Skills | 1 | 0.3%
 | Basic Computing and Applications | 46 | 15.4%
 | Intermediate Computing and Applications | 134 | 45.0%
 | Advanced Computing and Applications | 66 | 22.2%
 | Proficient Computing, Applications, and Programming | 51 | 17.1%
Table 2. Reflective measurement model.

Latent Variable | Indicator | Item Mean | Standard Deviation | Loading | AVE | CR | CA
System Quality | SQ1 | 2.81 | 1.10 | 0.83 | 0.63 | 0.87 | 0.81
 | SQ2 | 2.94 | 1.07 | 0.78
 | SQ3 | 2.72 | 1.02 | 0.79
 | SQ4 | 3.31 | 0.98 | 0.77
Information Quality | IQ1 | 3.46 | 0.96 | 0.78 | 0.66 | 0.92 | 0.89
 | IQ3 | 3.13 | 1.00 | 0.85
 | IQ4 | 3.32 | 0.93 | 0.85
 | IQ5 | 3.51 | 0.92 | 0.83
 | IQ6 | 3.26 | 1.06 | 0.76
 | IQ7 | 3.34 | 1.09 | 0.79
Service Quality | SVQ1 | 3.66 | 0.94 | 0.88 | 0.71 | 0.90 | 0.85
 | SVQ2 | 3.51 | 0.97 | 0.90
 | SVQ3 | 3.42 | 1.03 | 0.91
 | SVQ4 | 3.52 | 0.83 | 0.65
System Use | SU5 | 2.66 | 1.08 | 0.76 | 0.63 | 0.91 | 0.88
 | SU6 | 3.02 | 1.20 | 0.81
 | SU7 | 2.60 | 1.09 | 0.80
 | SU8 | 3.39 | 1.10 | 0.76
 | SU9 | 2.63 | 1.16 | 0.82
 | SU10 | 3.07 | 1.19 | 0.81
User Satisfaction | US1 | 2.88 | 1.12 | 0.89 | 0.72 | 0.93 | 0.90
 | US2 | 3.30 | 1.13 | 0.88
 | US3 | 3.25 | 1.13 | 0.88
 | US4 | 3.24 | 1.05 | 0.72
 | US5 | 3.19 | 1.06 | 0.85
Net Impacts | NI3 | 3.38 | 1.06 | 0.84 | 0.63 | 0.95 | 0.94
 | NI4 | 3.34 | 1.04 | 0.79
 | NI5 | 3.28 | 1.01 | 0.84
 | NI6 | 3.50 | 0.97 | 0.74
 | NI7 | 3.44 | 1.02 | 0.82
 | NI8 | 2.78 | 1.15 | 0.82
 | NI9 | 3.12 | 1.01 | 0.83
 | NI10 | 2.90 | 1.04 | 0.77
 | NI11 | 3.11 | 0.98 | 0.85
 | NI12 | 3.57 | 0.96 | 0.74
 | NI13 | 3.08 | 0.99 | 0.70
Note. Loadings > 0.60; AVE > 0.50; CR > 0.70; CA > 0.70. Loadings and AVE indicate convergent validity; CR and CA indicate internal consistency reliability.
Table 3. Heterotrait–monotrait ratio (HTMT).

 | Information Quality | Net Impacts | Service Quality | System Quality | System Use
Information Quality |  |  |  |  | 
Net Impacts | 0.83 |  |  |  | 
Service Quality | 0.53 | 0.53 |  |  | 
System Quality | 0.89 | 0.78 | 0.51 |  | 
System Use | 0.71 | 0.74 | 0.31 | 0.71 | 
User Satisfaction | 0.89 | 0.90 | 0.53 | 0.87 | 0.77
Note. HTMT < 0.90.
Table 4. R2 and Q2 of the endogenous latent variables.

Endogenous Variable | R2 | R2 Result | Q2
Net Impacts | 0.82 | Substantial | 0.42
System Use | 0.55 | Moderate | 0.26
User Satisfaction | 0.86 | Substantial | 0.50
Note. Q2 > 0 indicates the path model’s predictive relevance for the particular construct.
Table 5. Measurement model fit indices.

Index | System Quality | Information Quality | Service Quality | System Use | User Satisfaction | Net Impacts
Chi-square(df) | 1.54(1) | 6.48(4) | 0.06(1) | 7.35(4) | 3.76(4) | 47.51(31)
RMSEA | 0.042 | 0.046 | 0.000 | 0.053 | 0.000 | 0.042
GFI | 0.997 | 0.993 | 1.000 | 0.992 | 0.995 | 0.972
AGFI | 0.974 | 0.962 | 0.999 | 0.958 | 0.981 | 0.940
CFI | 0.999 | 0.998 | 1.000 | 0.997 | 1.000 | 0.993
Note. Goodness-of-fit index (GFI) > 0.9, adjusted goodness-of-fit index (AGFI) > 0.9, root mean square error of approximation (RMSEA) < 0.06, comparative fit index (CFI) > 0.9, chi-square(df) (degrees of freedom) = the smaller, the better.
Table 6. Structural estimates (hypothesis testing).

Hypothesis | Path Coefficient (a) | t Value (a) | f Square (b) | Conclusion
H1: System Quality → System Use | 0.34 ** | 4.19 | 0.08 | Supported
H2: System Quality → User Satisfaction | 0.26 ** | 4.85 | 0.10 | Supported
H3: Information Quality → System Use | 0.39 ** | 4.78 | 0.10 | Supported
H4: Information Quality → User Satisfaction | 0.40 ** | 6.81 | 0.21 | Supported
H5: Service Quality → System Use | −0.05 | 0.98 | 0.00 | Rejected
H6: Service Quality → User Satisfaction | 0.10 * | 2.57 | 0.03 | Supported
H7: System Use → Net Impacts | 0.19 ** | 4.23 | 0.06 | Supported
H8: System Use → User Satisfaction | 0.26 ** | 5.93 | 0.15 | Supported
H9: User Satisfaction → Net Impacts | 0.70 ** | 18.39 | 0.90 | Supported
Note. (a) Bootstrapping with 5000 samples (two-tailed test), * p < 0.05, ** p < 0.01; (b) f square < 0.02 no effect, 0.02–0.15 low effect, 0.15–0.35 medium effect, >0.35 large effect of the exogenous latent variable.
Table 7. Summary of the IPMA for Net Impacts.

Predecessor Construct | Importance (Total Effects) | Performance
Information Quality | 0.43 | 58
Service Quality | 0.07 | 63
System Quality | 0.30 | 49
System Use | 0.34 | 47
User Satisfaction | 0.61 | 54
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
