Review

Challenges and Possibilities of ICT-Mediated Assessment in Virtual Teaching and Learning Processes

by Esperanza Milena Torres-Madroñero 1, Maria C. Torres-Madroñero 2,* and Luz Dary Ruiz Botero 1

1 Faculty of Social Science, Institucion Universitaria Colegio Mayor de Antioquia, Medellin 050012, Colombia
2 MIRP Lab, Instituto Tecnológico Metropolitano, Medellin 050012, Colombia
* Author to whom correspondence should be addressed.
Future Internet 2020, 12(12), 232; https://doi.org/10.3390/fi12120232
Submission received: 27 November 2020 / Revised: 12 December 2020 / Accepted: 16 December 2020 / Published: 18 December 2020
(This article belongs to the Special Issue E-Learning and Technology Enhanced Learning)

Abstract

The transformations in educational environments due to the immersion of information and communication technologies (ICT) make it necessary to analyze the limits and possibilities of assessment in virtual training processes. This paper presents an analysis of the meanings of ICT-mediated assessment, establishing what kinds of knowledge are suitable for this type of evaluation, as well as the challenges and possibilities of virtual tools. To this end, we present a systematic review of ICT-mediated evaluation and assessment according to educational paradigms and their implementation. We highlight that contemporary pedagogical models and their implementation in ICT mediation tools show a trend towards quantitative and summative evaluation. The commonly used learning management systems (LMS) include several types of questions oriented to quantitative evaluation, with multiple-choice being the most common. However, new technological approaches such as gamification, virtual reality, and mobile learning open new assessment possibilities. ICT educational platforms and new technologies demand new skills, such as digital literacy, from all educational actors.

1. Introduction

Assessment is the action of assigning a value to generate a judgment on the validity of an action, a process, or a relationship according to socially validated parameters. In education, evaluation mediates the relationships between teachers and students, and between the educational standards regulated at the international and national levels and concrete educational practices [1]. In this sense, evaluative action represents an institutional framework that reflects a place of power in the pedagogical relationship, usually associated with educational quality [2,3]. In this way, educational assessment guarantees the culmination of access to knowledge and, at the same time, is an observable and quantifiable element that facilitates measurement and comparison with others. Educational assessment and evaluation set standard benchmarks from which the degree of progress is inferred [4].
Currently, it is possible to distinguish three major perspectives on educational assessment: the first has an objective and quantifiable character [5]; the second is hermeneutic and dialectic [6]; and the third is a critical view that questions the relationship between teachers and students [7]. Each of these perspectives frames the relationships among the actors of the teaching process and the way knowledge and training purposes are understood. Given the centrality of evaluation in teaching and learning processes, it is the subject of extensive discussion across educational paradigms and their approaches [1,2,3,4]. Therefore, assessment has become an active field of educational research concerned with its meaning, methods, tools, limits, and possibilities.
Education mediated by information and communication technologies (ICT) raises new debates. Learning technologies incorporate ICT in methodological terms beyond the manipulation of tools [8], including technologies for the empowerment and participation of citizens (e-democracy) [9]. These views exemplify the debate that ICT has introduced in educational terms. The transformations in educational environments due to technological immersion open new concerns, requiring the establishment of the limits and possibilities of the assessment process in the growing virtual environment, where new relationships between actors and knowledge are positioned [10]. Assessment processes face a new literacy, a multifaceted dimension that is not limited to the writing or reading of words but demands new skills and other types of knowledge, altering the relationship with knowledge and with evaluation processes [11]. In this sense, ICT-mediated assessment demands new skills and questions traditionally recognized educational paradigms.
Previous literature reviews about ICT-mediated assessment have taken several approaches with different goals. For instance, Charteris et al. [12] discussed online learning assessment for higher education from two approaches: performativity and existential learning. Spector et al. [13] presented a synthesis of the role of technology in assessment. Xiong and Suen [14] described the possibilities and challenges of assessment approaches in massive open online courses. Nikou and Economides [15], and Muñoz and González [16], presented literature reviews about mobile-based assessment. Finally, Mousavinasab et al. [17] reviewed intelligent tutoring systems and their evaluation methods.
This paper presents a systematic review of works published from 2016 to 2020. Unlike previous literature reviews, this paper focuses on three aspects relevant to understanding the capabilities, limitations, and possible gaps in ICT-mediated assessment. First, our work analyzes the purposes and objectives of evaluation, identifying what is being evaluated and why. Second, it determines the areas of knowledge with significant development and use of ICT-mediated assessment, highlighting the areas that need further study. Finally, it identifies the digital tools and platforms that support online assessment, along with their possibilities and limitations. The aim of this review is to contribute to the improvement of assessment practices for teachers using ICT tools. Additionally, this review is projected as a reference framework for developing educational policies that incorporate virtuality in teaching–learning practices. Finally, this study raises a reflection on the purpose of evaluation, considering how this educational activity can go beyond quantifying knowledge and become part of the educational process. To achieve these goals, we focused on three research questions:
  • What are the meanings of ICT-mediated assessment?
  • What kind of knowledge is susceptible to ICT-mediated assessment?
  • What are the assessment possibilities offered by the current ICT platforms?

2. Methodology

The study was conducted using a mixed methodology divided into two phases. Initially, we performed a systematic review of ICT-mediated evaluation and assessment. From this review, we established trends of ICT-mediated assessment according to actors, purposes of assessment, fields of knowledge, educational levels, digital tools, and platforms. In the second phase of this study, we analyzed the most used platforms for teaching and learning mediated by ICT and their possibilities for online assessment.
For the systematic review, we used the PRISMA methodology [18]. The search Equation (1) includes terms related to ICT-mediated learning–teaching processes, such as “eLearning”, “virtual education”, “online education”, “online learning”, and “mobile learning”. Additionally, Equation (1) includes the terms “evaluation” and “assessment” to limit the search to works related to monitoring and scoring the learning process.
((“eLearning” OR “virtual education” OR “online education” OR “online learning” OR “mobile learning”) AND (“evaluation” OR “assessment”)) OR (e-assessment OR eassessment)
This review used three databases: SAGE, SCOPUS, and Taylor&Francis, selected for their high number of journals in the field of social sciences and humanities. The search was limited to research articles published between 2016 and 2020. Conference proceedings, book chapters, and pre-print papers were not included in the results. Figure 1 describes the steps of the PRISMA methodology.
Using the search Equation (1), we obtained 541 articles in SCOPUS, 302 in Taylor&Francis, and 92 in SAGE Journals. Then, duplicated records were identified using Mendeley, leaving 863 articles. We screened the titles and abstracts of these articles to select only documents closely related to ICT-mediated assessment. Table 1 summarizes the inclusion and exclusion criteria employed for paper selection. We excluded 656 documents in the title and abstract screening. Most of the excluded papers report evaluations of online platforms or digital tools. It was also common to find the design of courses, online platforms, or digital contents without details about assessment approaches. Then, 207 full-text articles were assessed, again considering the inclusion and exclusion criteria of Table 1; some texts did not meet these criteria and were not detected in the initial screening due to the lack of information in their titles and abstracts. Finally, the systematic review included 150 documents. This study established ten categories to classify the research papers and synthesize the trends. These categories are described in detail in the results.
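As a minimal illustration of the record flow just described (not part of the original study), the following plain-Python sketch reproduces the PRISMA tally using the counts reported above; the variable names are hypothetical, and the screening itself was performed manually by the authors.

# Illustrative sketch: PRISMA record flow with the counts reported in this section.
records_per_database = {"SCOPUS": 541, "Taylor&Francis": 302, "SAGE": 92}

identified = sum(records_per_database.values())                     # 935 records identified
after_deduplication = 863                                           # after duplicate removal in Mendeley
excluded_at_screening = 656                                         # title/abstract screening (Table 1 criteria)
full_text_assessed = after_deduplication - excluded_at_screening    # 207 full-text articles assessed
included_in_review = 150                                            # documents in the final synthesis
excluded_at_full_text = full_text_assessed - included_in_review     # 57 excluded at the full-text stage

print(f"Identified: {identified}, deduplicated: {after_deduplication}, "
      f"full-text assessed: {full_text_assessed}, included: {included_in_review}")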
In the second phase of this study, we analyzed the most used ICT platforms for teaching and learning processes. We included four LMS (learning management systems) widely used in higher education [19]: Moodle, DOKEOS, Claroline, and SAKAI. Additionally, we incorporated Microsoft Teams and Google Classroom. For each of these platforms, we analyzed the available evaluation tools and identified limitations and possibilities by comparing these LMS with the digital tools and platforms detected in the systematic review.

3. Results

This section describes the results of the systematic review and the comparison of LMS. First, we present the categories for the quantitative synthesis and meta-analysis of the selected papers. These categories are related to the research questions of this study. The analysis of the results and the answers to each of the research questions are presented in the discussion section.

3.1. Systematic Review of ICT-Mediated Assessment

According to the selected papers and a first analysis of these texts, we defined ten categories related to the three research questions. Table 2 presents the relation between the categories, the research questions, and the type of analysis used for each one.
Figure 2 presents the distribution of the selected documents per year of publication. For 2016, we obtained 34 documents using the inclusion and exclusion criteria of Table 1. The number of documents was highest for 2017 (38 research articles). We retrieved 24 documents each for 2018 and 2019, and 30 documents for 2020. Figure 3 shows the distribution of retrieved research articles per continent. The highest number of publications came from Europe (53 documents). In this region, the countries with the most ICT-mediated assessment publications were England and Spain, with 15 and 14 articles, respectively. We retrieved 50 documents from Asia; the country with the most publications in this continent was China, with 12 articles. From North America we obtained 21 articles, most of them from the United States of America (19 documents). From Africa, Oceania, and South America, we obtained 13, 9, and 4 research articles, respectively.
The selected papers comprise descriptive, experimental, perception, and case studies, as well as works describing technological developments such as digital tools, platforms, and web or mobile applications. Figure 4 shows the percentage of documents of each type of study. Descriptive studies analyze how the use of digital tools or platforms affects student performance [20,21,22,23,24], describe online course designs and their assessment components [25,26], present qualitative analyses of the possibilities and limitations of virtual assessment [27,28,29,30,31,32,33], and analyze reliability and validity [34,35]. Of the selected articles, 39 correspond to descriptive studies. Experimental studies (31 selected articles) compare the performance of students using ICT-mediated assessment strategies [36,37,38,39,40]; some of these studies use control groups that develop the assessment activities as usual in the course [41,42,43,44,45,46,47], for example, in a face-to-face setting [48,49,50]. We also found 34 studies of perception, which seek to determine the acceptability, engagement, opinion, and valuation of digital tools, platforms, or ICT-mediated assessment strategies from students and teachers (e.g., [51,52,53,54]). Other articles present the design and implementation of digital tools, web or mobile platforms, or software supporting ICT-mediated assessment; 40 articles were found in this category (e.g., [55,56,57,58]). Finally, the selected articles include six case studies, which use limited data or small sample populations to analyze online evaluation [59,60,61].

3.1.1. What Are the Meanings of ICT-Mediated Assessment?

To answer the question about the meanings and interpretations of ICT-mediated assessment, we established three categories (Table 2). The first determines the actors who are given relevance in research about ICT-mediated assessment. From this category, we quantified how many studies are oriented to a single actor (i.e., students or teachers) and how many consider both perspectives. Figure 5 presents the percentages of studies considering students, teachers, or both.
Student-centered works seek to measure students’ performance or progress once ICT-mediated assessments are implemented and to establish their opinions and perceptions about this assessment approach. Of the selected articles, 115 focus on the student as the main actor in the research. In contrast, there are few studies focused exclusively on teachers; in this systematic review, only eleven articles were oriented to teachers. Some of these studies are aimed at: decreasing the possibility of copying or other types of misconduct during evaluations [62,63]; facilitating the grading of written texts [64,65] or automating this task [57,63]; reducing the bias of the grading process [40]; and knowing the opinions and perceptions of teachers regarding the use of online assessment activities [66,67,68].
Additionally, some studies seek to analyze both perspectives (24 papers in this review). Examples of this research include the adaptation of assessment activities to large-scale courses [69]; the implementation of strategies based on peer assessment [30,70], collaborative tasks [71], gamification [56], e-portfolios [72], online laboratories [73], or mobile devices [52]; and the evaluation of the impact of automatic grading tools on the training process [74]. We also found studies about perception [75] and about the multi-literacy skills that teachers need to design and implement digital courses and that students need to complete them [25,73].
The second category asks what is being evaluated: contents, skills, or outcomes. Figure 6 shows the percentages for each type of assessed element. Case studies and perception papers were not considered for computing these percentages since they usually do not include details about what is being evaluated; additionally, 12 papers could not be categorized. Regarding skills evaluation, 48 papers present ICT-mediated assessment tools or strategies. The evaluated skills include writing [22,48,70,74,76,77], communication in foreign languages [38,63,78,79], programming [40,45,80,81], problem solving [37,82], critical thinking [78,83,84], pedagogy [20,85], and reading [86], among others. A total of 45 papers are oriented to knowledge or content evaluation, and only three papers to outcome assessment.
Finally, we analyzed the purposes of ICT-mediated assessment. Several papers focus on measuring student performance against specific knowledge and skills (e.g., [32,51,59]). However, less traditional perspectives are also found in the literature, such as those oriented to formative evaluation (e.g., [43,44,68]). Formative assessment is a student-centered approach that starts from the needs and capabilities of students to design the learning process [44], in which effective feedback in the evaluation activities plays a central role [59], as does student motivation for the development of activities [24]. In formative assessment, results are used to track progress and to identify factors that affect the learning process [24]. Formative assessment studies include developing and applying strategies for the customization of assessment activities according to students’ needs and levels [30,36,55,63,87,88]. Other works consider assessment as an instrument to increase student engagement in the courses. Here we see the implementation of tools such as e-portfolios [20,21], mobile learning [89], and peer-assessment strategies (e.g., [22,49,71]). Peer assessment involves both students and teachers in the grading process. This approach is useful for massive open online courses (MOOCs) [49,90], but also in traditional learning settings, since it improves student engagement in the learning process [91] and creates a collaborative environment [71]. However, some authors question its reliability and validity [22,34] due to the effect of personal interests on grading [40].

3.1.2. What Kind of Knowledge Is Susceptible to ICT-Mediated Assessment?

To answer the question about the kind of knowledge that is susceptible to ICT-mediated assessment, we identified both the fields of knowledge and the training levels (Table 2) to which the selected studies are oriented. Some of the articles could not be categorized, given the lack of student characterization. The categorization of the field of knowledge is based on the subject or degree program to which the students under study belong. The fields of knowledge included engineering, science, foreign languages, education, health, economic and administrative sciences, arts and humanities, and social sciences. Figure 7 shows the number of papers categorized according to these fields of knowledge. Students belonging to engineering degrees were included in 32 papers. Another frequent field of knowledge (21 articles) is science (i.e., physics, biology, exact sciences). Foreign language is also a field of interest for incorporating ICT-mediated assessment, mainly English as a second language (e.g., [48,76]). The area with the fewest studies (only three papers) is the social sciences [26,64,92]. Figure 7 shows a trend towards the use and development of ICT-mediated assessment in STEM (Science, Technology, Engineering, and Math). Regarding training levels, we found 11 articles oriented to higher education studies, 22 oriented to K-12 (primary and secondary) education, and 17 that could not be categorized.

3.1.3. What Are the Assessment Possibilities Offered by the Current ICT Platforms?

We established two categories to answer the question about ICT-mediated assessment possibilities (Table 2). First, we analyzed the digital tools and strategies used to incorporate ICT-mediated evaluation. Then, we analyzed the pedagogical approaches considered in the studies.
From the review, we can identify several technological tools that enhance the assessment process, for instance, e-portfolios and mobile devices. Education programs use e-portfolios for teacher training; however, these have the potential to be used in several fields of knowledge [20,21,93]. Students build e-portfolios to demonstrate the skills acquired in the training process. One of their advantages is the opportunity for constant monitoring of student progress, allowing the personalization of the learning process [20,21]. While e-portfolios are assessment instruments, they also allow students to develop documentation and reporting skills [93]. However, their implementation requires teachers and students to use several digital tools, highlighting the importance of digital literacy [20,21].
On the other hand, mobile learning is a trend in educational research that seeks to exploit the ubiquity of mobile devices, such as cell phones and tablets, to encourage and enhance learning. The use of mobile devices in learning processes allows the development of evaluation activities of various types. These may include traditional multiple-choice questionnaires [94], but also the construction of multimedia material (photographs, audio, and videos) to evidence the appropriation of knowledge and the development of skills [94]. Teachers can promote a collaborative environment using mobile devices [94] and can also use these devices to develop outdoor assessment activities [89,95], highlighting the possibility of evaluating at any time and in any place [95]. A common research question is students’ willingness and acceptance of using their mobile phones as assessment tools; the studies report both positive and negative perceptions [52,89,94,96,97].
Additionally, there are other strategies for the incorporation of ICT-mediated assessment in learning processes, such as self-assessment, peer-assessment, gamification, augmented reality, learning analytics, adaptive assessment, and automated assessment. Table 3 summarizes these key strategies and challenges.
In addition to the tools mentioned above, the systematic review revealed several platforms and software developed or used to improve assessment activities. Table 4 summarizes some of the software and platforms identified in the review.
Finally, the last category classified the research documents according to their pedagogical approaches. We clustered the pedagogical models as traditional, nontraditional, and critical, as described in Table 5 [122,123,124]. Traditional pedagogical models are teacher-centered and use summative assessments to measure and compare student performance [122]. In contrast, nontraditional models (such as experimental, new-school, developmental, and constructivist) are student-centered: teachers are facilitators of the learning process, and assessment is a tool for recognizing weaknesses and potentialities to enhance the learning process [123]. In educational research, there is also the critical pedagogical model, where self-reflection and the identification of one’s own potential and needs are the basis of the training process [124].
The papers were categorized into traditional and nontraditional approaches; the systematic review did not identify any studies based on the critical pedagogical approach. Some evaluation strategies or tools were oriented to both traditional and nontraditional approaches and were therefore grouped into a combined category. Figure 8 presents the distribution of the selected articles across the pedagogical models. We identified a trend towards traditional approaches that include summative assessments, which seek to measure student performance or knowledge.

3.2. LMS Evaluation Tools

We analyzed the characteristics and digital tools included in some commonly used LMS platforms. Table 6 summarizes the types of questions and general configurations included in Moodle, DOKEOS, Claroline, SAKAI, Microsoft Teams, and Google Classroom. These evaluation tools are characterized by different types of question configurations, which can be used to construct quizzes and questionnaires. These LMS evidence that technologies form an ecosystem of their own that transgresses the relationships between actors and knowledge. In this sense, evaluative practices are also affected in this context, inviting other readings and sensitivities. These platforms alter the synchronous relationship, making time and sequential access to content more flexible.

4. Discussion

We can observe that the meaning of assessment has, on the one hand, a traditional approach, where evaluation is used to measure performance and to standardize knowledge. On the other hand, there is an interest in new educational approaches inspired by ICT. Incorporating technological tools decentralizes the teacher’s role in assessment and creates other possibilities, such as peer evaluation (see Table 3). There is also a migration from real-time to asynchronous assessments, where the hierarchical relationship between teacher and students is displaced (e.g., [52,74]). In this sense, a question arises about the role of the teacher in virtual environments [25,73]. The teacher becomes a digital content producer, requiring skills related to the management of digital tools and to the new literacies that allow taking advantage of these tools in the learning process (e.g., [24,43,44,59,68]). We found studies where evaluation moves away from its grading purpose and becomes a diagnostic instrument for customizing the learning process [36,55,87,88,101,102,103,104,105]. Through assessment, teachers identify the strengths and needs of the students and define student profiles.
Most studies were student-centered, focusing on the students’ role in their formative process. The perception studies show the importance of students’ engagement in their learning process and its relevance to teachers, digital tools, and learning contents [51,52,53,54].
Despite the diversity of technological possibilities, commonly used LMS are limited to quantitative evaluation, with multiple-choice questions playing the leading role. Although some of these LMS incorporate analytical and quantitative options such as testing, matching questions, and embedded responses, these platforms promote summative evaluation, limiting feedback to predefined sentences. There are, therefore, multiple but scattered digital tools that allow learning to be enhanced through assessment. However, these tools are conditioned on the teacher’s skills to use them and to incorporate them efficiently into the training process.
On the other hand, this review evidenced that the STEM (science, technology, engineering, and math) fields incorporate ICT-mediated assessment more frequently than other areas, such as the social sciences. Several authors highlight the gap in ICT integration in social science education [99,125]. Questions that involve mathematical calculations and analysis are easier to implement in LMS than questions that assess a student’s critical thinking. Here, teacher training in both pedagogical and technological issues plays a fundamental role. However, other tools are being introduced for ICT-mediated evaluation across a broad spectrum of knowledge areas. Although not yet embedded in LMS, these tools include gamification [56,70,100], augmented reality [43,50], social networks [54], and mobile platforms [15,37,52,71,89,94,95,96,97,99], offering more flexible options for the development of qualitative assessments that truly evaluate critical thinking and skill development.
Concerning the assessment possibilities, ICT platforms provide new scenarios for interaction and knowledge construction, appealing to multiple modes: sound, image, and narrative. Incorporating these modes requires skills that transgress the written culture present in the traditional evaluation paradigm. ICT allows asynchrony, prompting reflection about the meaning of assessment: what is being evaluated, what for, and how [95]. Paradoxically, traditional evaluative practices are the ones most closely aligned with commonly used LMS. However, the opening of other forms of reading and writing, the asynchronous experience, and the reconfiguration of learning spaces invite us to consider other assessment forms that transcend the informative level and give rise to critical analysis, argumentation, and the reflexive appropriation of knowledge.
One example of technological tools with a significant presence in the selected studies is mobile technology [15,37,52,71,89,94,95,96,97,99]. This type of device expands the forms and structure of assessment activities. In mobile learning, evaluation is not limited in space and time, changing the scenarios in which the student performs the evaluation; these activities become embedded in the daily life of the student.
Despite these advances, there is still a gap in addressing the critical or emancipatory perspectives of education: we found no evidence of the incorporation of ICT mediation and ICT-based assessment into critical education approaches. Additionally, there is a new challenge for institutions: while assessment practices have been transformed, institutional expectations of assessment remain unchanged. For educational institutions, assessment is an indicator for standardization. Here, we found a need to design educational policies aligned with these new educational possibilities.
ICT educational platforms demand new skills from all educational actors [112]. First, digital literacy contributes to using the available tools and becoming familiar with their uses; second, there is a demand to devise strategies that strengthen feedback practices in evaluation [54]. Although the available tools allow communication between students and teachers, technology-mediated evaluation exercises often limit monitoring, dialogue, and discussion. In this sense, works that incorporate teachers’ reflection and technology management are required.

5. Conclusions

This paper analyzed the perspectives of ICT-mediated assessment through a systematic review of works published from 2016 to 2020 and a comparison of widely used LMS. The systematic review followed the PRISMA methodology, using research papers from the SAGE, SCOPUS, and Taylor&Francis databases. Once we removed duplicated records and documents that did not meet the inclusion and exclusion criteria, 150 research papers related to the development or incorporation of ICT-mediated assessment were included in the analysis. This study defined ten categories used to classify the documents and to address the research questions.
The first category was a quantitative synthesis of the types of studies found in the literature. This analysis evidenced the scientific community’s interest in developing experimental, descriptive, and perception studies, as well as research involving technological development.
The second category analyzed the actors on which the research focuses: students, teachers, or both. We noted that most studies are student-centered, evidencing a change in the role of students in their formative process. However, the few studies oriented towards teachers show the need: first, to define the new role of teachers in virtual learning environments; and second, to determine the skills and knowledge that teachers require for the development of their role in the learning process.
We also analyzed the evaluation of contents, skills, and outcomes, and the purpose of this evaluation. We found that, while the traditional evaluation approach to measuring and standardizing knowledge levels is persistent, there are also nontraditional approaches that make use of ICT to explore new evaluation possibilities and interpretations. Here, we highlight the studies that orient the evaluation towards the diagnosis of students for the personalization of the formative process and others that place students as the protagonists of their formation, with strategies such as peer-assessment.
The review showed the tendency to develop and incorporate ICT-mediated assessment in STEM areas. However, the diversity of digital tools is an opportunity for the social sciences and other areas.
We found a high number of digital tools and applications, enabling diverse assessment activities. We found both traditional settings, based on questioning, and more creative and interactive forms of assessment. However, this growing technological development imposes challenges on the actors of the learning process: institutions, teachers, and students. It is necessary to define new educational approaches that allow the efficient incorporation of ICT at the institutional level. For their part, teachers must acquire new skills to use digital tools and to design meaningful evaluation activities with them. Students, too, must acquire the digital skills that enable them to participate in new forms of education.
Finally, we can note some limitations of this study. On the one hand, there are limitations related to the methodology, since the results depend on the research questions, the databases, and the inclusion and exclusion criteria. This study described a clear and precise search equation related to the research questions; however, the search equation could be posed differently, revealing other studies given each database’s search algorithms. The selected databases also limit the study to the articles indexed in them.
On the other hand, we found a limitation related to the language of publication. The selected databases mostly index articles in English; therefore, studies in other languages are not reflected in this review. For example, there was a lack of studies on ICT-mediated assessment in regions such as Latin America.

Author Contributions

Conceptualization, E.M.T.-M., M.C.T.-M., and L.D.R.B.; methodology, E.M.T.-M., L.D.R.B., and M.C.T.-M.; formal analysis, E.M.T.-M., and M.C.T.-M.; investigation, E.M.T.-M., L.D.R.B., and M.C.T.-M.; writing—original draft preparation, E.M.T.-M., M.C.T.-M., and L.D.R.B.; writing—review and editing, M.C.T.-M.; funding acquisition, E.M.T.-M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Minciencias, project PAZRED: Collective memory fabrics in the West of Antioquia, grant number FP44842-459-2018.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Richmond, G.; Salazar, M.; Jones, N. Assessment and the Future of Teacher Education. J. Teach. Educ. 2019, 70, 86–89. [Google Scholar] [CrossRef] [Green Version]
  2. Sadler, D. Interpretations of criteria-based assessment and grading in higher education. Assess. Eval. High. Educ. 2005, 30, 175–194. [Google Scholar] [CrossRef]
  3. Noaman, A.; Ragab, A.; Madbouly, A.; Khedra, A.; Fayoumi, A. Higher education quality assessment model: Towards achieving educational quality standard. Stud. High. Educ. 2017, 42, 23–46. [Google Scholar] [CrossRef]
  4. Braun, H.; Singer, J. Assessment for monitoring of education systems: International comparisons. Ann. Am. Acad. Pol. Soc. Sci. 2019, 683, 75–92. [Google Scholar] [CrossRef]
  5. Huang, X.; Hu, Z. On the Validity of Educational Evaluation and Its Construction. High. Educ. Stud. 2015, 5, 99–105. [Google Scholar] [CrossRef] [Green Version]
  6. Penuel, W. A dialogical epistemology for educational evaluation. NSSE Yearb. 2010, 109, 128–143. [Google Scholar]
  7. Kim, K.; Seo, E. The relationship between teacher efficacy and students’ academic achievement: A meta-analysis. Soc. Behav. Personal. 2018, 46, 529–540. [Google Scholar] [CrossRef]
  8. Liu, Q.; Geertshuis, S.; Grainger, R. Understanding academics’ adoption of learning technologies: A systematic review. Comput. Educ. 2020, 151, 103857. [Google Scholar] [CrossRef] [Green Version]
  9. Bouzguenda, I.; Alalouch, C.; Fava, N. Towards smart sustainable cities: A review of the role digital citizen participation could play in advancing social sustainability. Sustain. Cities Soc. 2019, 50, 101627. [Google Scholar] [CrossRef]
  10. Hernández, R.; Cáceres, I.; Zarate, J.; Coronado, D.; Loli, T.; Arévalo, G. Information and Communication Technology (ICT) and Its Practice in Educational Evaluation. J. Educ. Psychol. 2019, 7, 6–10. [Google Scholar] [CrossRef]
  11. Mioduser, D.; Nachmias, R.; Forkosh-Baruch, A. New Literacies for the Knowledge Society. In Second Handbook of Information Technology in Primary and Secondary Education; Voogt, J., Knezek, G., Christensen, R., Lai, K.W., Eds.; Springer: Berlin/Heidelberg, Germany, 2008; pp. 23–42. ISBN 978-0-387-73315-9. [Google Scholar]
  12. Charteris, J.; Quinn, F.; Parkes, M.; Fletcher, P.; Reyes, V. e-Assessment for learning and performativity in higher education: A case for existential learning. Australas. J. Educ. Technol. 2016, 32, 112–122. [Google Scholar] [CrossRef] [Green Version]
  13. Spector, J.M.; Ifenthaler, D.; Sampson, D.; Yang, J.L.; Mukama, E.; Warusavitarana, A.; Lokuge, K.; Eichhorn, K.; Fluck, A.; Huang, R.; et al. Technology enhanced formative assessment for 21st century learning. J. Educ. Technol. Soc. 2016, 19, 58–71. [Google Scholar]
  14. Xiong, Y.; Suen, H.K. Assessment approaches in massive open online courses: Possibilities, challenges and future directions. Int. Rev. Educ. 2018, 64, 241–263. [Google Scholar] [CrossRef]
  15. Nikou, S.A.; Economides, A.A. Mobile-based assessment: A literature review of publications in major referred journals from 2009 to 2018. Comput. Educ. 2018, 125, 101–119. [Google Scholar] [CrossRef]
  16. Muñoz, J.; González, C. Evaluación en Sistemas de Aprendizaje Móvil: Una revisión de la literatura. Rev. Ibérica Sist. Tecnol. Inf. 2019, E22, 187–199. [Google Scholar]
  17. Mousavinasab, E.; Zarifsanaiey, N.R.; Niakan, S.; Rakhshan, M.; Keikha, L.; Ghazi, M. Intelligent tutoring systems: A systematic review of characteristics, applications, and evaluation methods. Interact. Learn. Env. 2018. [Google Scholar] [CrossRef]
  18. Moher, D.; Liberati, A.; Tetzlaff, J.; Altman, D.G. Preferred Reporting Items for Systematic Reviews and Meta-Analyses: The PRISMA Statement. PLOS Med. 2009, 6. [Google Scholar] [CrossRef] [Green Version]
  19. Kasim, N.; Khalid, F. Choosing the Right Learning Management System (LMS) for the Higher Education Institution Context: A Systematic Review. Int. J. Emerg. Technol. Learn. 2016, 11, 55–61. [Google Scholar] [CrossRef] [Green Version]
  20. Makokotlela, M.V. An E-Portfolio as an Assessment Strategy in an Open Distance Learning Context. Int. J. Inf. Commun. Technol. Educ. 2020, 16, 122–134. [Google Scholar] [CrossRef]
  21. Chen, S.Y.; Tseng, Y.F. The impacts of scaffolding e-assessment English learning: A cognitive style perspective. Comput. Assist. Lang. Learn. 2019. [Google Scholar] [CrossRef]
  22. Formanek, M.; Wenger, M.C.; Buxner, S.R.; Impey, C.D.; Sonam, T. Insights about large-scale online peer assessment from an analysis of an astronomy MOOC. Comput. Educ. 2017, 113, 243–262. [Google Scholar] [CrossRef]
  23. Wimmer, H.; Powell, L.; Kilgus, L.; Force, C. Improving Course Assessment via Web-based Homework. Int. J. Online Pedagog. Course Des. 2017, 7, 1–19. [Google Scholar] [CrossRef]
  24. Steif, P.S.; Fu, L.; Kara, L.B. Providing formative assessment to students solving multipath engineering problems with complex arrangements of interacting parts: An intelligent tutor approach. Interact. Learn. Environ. 2016, 24, 1864–1880. [Google Scholar] [CrossRef]
  25. Martin, F.; Ritzhaupt, A.; Kumar, S.; Budhrani, K. Award-winning faculty online teaching practices: Course design, assessment and evaluation, and facilitation. Internet High. Educ. 2019, 42, 34–43. [Google Scholar] [CrossRef]
  26. Lee, Y.; Rofe, J.S. Paragogy and flipped assessment: Experience of designing and running a MOOC on research methods. Open Learn. 2016, 31, 116–129. [Google Scholar] [CrossRef] [Green Version]
  27. García-Peñalvo, F.J.; Corell, A.; Abella-García, V.; Grande, M. Online assessment in higher education in the time of COVID-19. Educ. Knowl. Soc. 2020. [Google Scholar] [CrossRef]
  28. Thoma, B.; Turnquist, A.; Zaver, F.; Hall, A.K.; Chan, T.M. Communication, learning and assessment: Exploring the dimensions of the digital learning environment. Med. Teach. 2019, 41, 385–390. [Google Scholar] [CrossRef]
  29. Hills, L.; Clarke, A.; Hughes, J.; Butcher, J.; Shelton, I.; McPherson, E. Chinese whispers? Investigating the consistency of the language of assessment between a distance education institution, its tutors and students. Open Learn. 2018, 33, 238–249. [Google Scholar] [CrossRef]
  30. Leppisaari, I.; Peltoniemi, J.; Hohenthal, T.; Im, Y. Searching for effective peer assessment models for improving online learning in HE–Do-It-Yourself (DIY) case. J. Interact. Learn. Res. 2018, 29, 507–528. [Google Scholar]
  31. Alizadeh, T.; Tomerini, D.; Colbran, S. Teaching planning studios: An online assessment task to enhance the first year experience. J. Plan. Educ. Res. 2017, 37, 234–245. [Google Scholar] [CrossRef]
  32. Fenton-O’Creevy, M.; van Mourik, C. ‘I understood the words but I didn’t know what they meant’: Japanese online MBA students’ experiences of British assessment practices. Open Learn. 2019, 31, 130–140. [Google Scholar] [CrossRef]
  33. Hills, L.; Hughes, J. Assessment worlds colliding? Negotiating between discourses of assessment on an online open course. Open Learn. 2016, 31, 108–115. [Google Scholar] [CrossRef] [Green Version]
  34. Garcia-Loro, F.; Martin, S.; Ruiperez-Valiente, J.A.; San Cristobal, E.; Castro, M. Reviewing and analyzing peer review Inter-Rater Reliability in a MOOC platform. Comput. Educ. 2020, 154, 103894. [Google Scholar] [CrossRef]
  35. Li, X. Self-assessment as ‘assessment as learning’in translator and interpreter education: Validity and washback. Interpret. Transl. Train. 2018, 12, 48–67. [Google Scholar] [CrossRef]
  36. Chrysafiadi, K.; Troussas, C.; Virvou, M. Combination of fuzzy and cognitive theories for adaptive e-assessment. Expert Syst. Appl. 2020, 161, 113614. [Google Scholar] [CrossRef]
  37. Chiu, P.S.; Pu, Y.H.; Kao, C.C.; Wu, T.T.; Huang, Y.M. An authentic learning based evaluation method for mobile learning in Higher Education. Innov. Educ. Teach. Int. 2018, 55, 336–347. [Google Scholar] [CrossRef]
  38. Tsai, S.C. Effectiveness of ESL students’ performance by computational assessment and role of reading strategies in courseware-implemented business translation tasks. Comput. Assist. Lang. Learn. 2017, 30, 474–487. [Google Scholar] [CrossRef]
  39. Cohen, D.; Sasson, I. Online quizzes in a virtual learning environment as a tool for formative assessment. J. Technol. Sci. Educ. 2016, 6, 188–208. [Google Scholar]
  40. Wang, Y.; Liang, Y.; Liu, L.; Liu, Y. A multi-peer assessment platform for programming language learning: Considering group non-consensus and personal radicalness. Interact. Learn. Environ. 2016, 24, 2011–2031. [Google Scholar] [CrossRef]
  41. Lajane, H.; Gouifrane, R.; Qaisar, R.; Noudmi, F.; Lotfi, S.; Chemsi, G.; Radid, M. Formative e-Assessment for Moroccan Polyvalent Nurses Training: Effects and Challenges. Int. J. Emerg. Technol. Learn. 2020, 15, 236–251. [Google Scholar] [CrossRef]
  42. Astalini, A.; Darmaji, D.; Kurniawan, W.; Anwar, K.; Kurniawan, D. Effectivenes of Using E-Module and E-Assessment. International Association of Online Engineering. Int. J. Inf. Manag. 2019, 13, 21–38. [Google Scholar]
  43. Bhagat, K.K.; Liou, W.K.; Michael Spector, J.; Chang, C.Y. To use augmented reality or not in formative assessment: A comparative study. Interact. Learn. Environ. 2019, 27, 830–840. [Google Scholar] [CrossRef]
  44. Robertson, S.N.; Humphrey, S.M.; Steele, J.P. Using Technology Tools for Formative Assessments. J. Educ. Online 2019, 16, 2. [Google Scholar] [CrossRef]
  45. Amasha, M.A.; Abougalala, R.A.; Reeves, A.J.; Alkhalaf, S. Combining Online Learning & Assessment in synchronization form. Educ. Inf. Technol. 2018, 23, 2517–2529. [Google Scholar] [CrossRef]
  46. Lin, C.Y.; Wang, T.H. Implementation of personalized e-Assessment for remedial teaching in an e-Learning environment. Eurasia J. Math. Sci. Technol. Educ. 2017, 13, 1045–1058. [Google Scholar] [CrossRef]
  47. Petrović, J.; Pale, P.; Jeren, B. Online formative assessments in a digital signal processing course: Effects of feedback type and content difficulty on students learning achievements. Educ. Inf. Technol. 2017, 22, 3047–3061. [Google Scholar] [CrossRef]
  48. Vakili, S.; Ebadi, S. Exploring EFL learners developmental errors in academic writing through face-to-Face and Computer-Mediated dynamic assessment. Comput. Assist. Lang. Learn. 2019, 1–36. [Google Scholar] [CrossRef]
  49. Usher, M.; Barak, M. Peer assessment in a project-based engineering course: Comparing between on-campus and online learning environments. Assess. Eval. High. Educ. 2018, 43, 745–759. [Google Scholar] [CrossRef]
  50. Mumtaz, K.; Iqbal, M.M.; Khalid, S.; Rafiq, T.; Owais, S.M.; Al Achhab, M. An E-assessment framework for blended learning with augmented reality to enhance the student learning. Eurasia J. Math. Sci. Technol. Educ. 2017, 13, 4419–4436. [Google Scholar] [CrossRef]
  51. Bahar, M.; Asil, M. Attitude towards e-assessment: Influence of gender, computer usage and level of education. Open Learn. 2018, 33, 221–237. [Google Scholar] [CrossRef]
  52. Al-Emran, M.; Salloum, S.A. Students’ attitudes towards the use of mobile technologies in e-Evaluation. Int. J. Interact. Mob. Technol. 2017, 11, 195–202. [Google Scholar] [CrossRef]
  53. Cakiroglu, U.; Erdogdu, F.; Kokoc, M.; Atabay, M. Students’ Preferences in Online Assessment Process: Influences on Academic Performances. Turk. Online J. Distance Educ. 2017, 18, 132–142. [Google Scholar] [CrossRef]
  54. McCarthy, J. Enhancing feedback in higher education: Students’ attitudes towards online and in-class formative assessment feedback models. Act. Learn. High. Educ. 2017, 18, 127–141. [Google Scholar] [CrossRef]
  55. Becerra-Alonso, D.; Lopez-Cobo, I.; Gómez-Rey, P.; Fernández-Navarro, F.; Barbera, E. EduZinc: A tool for the creation and assessment of student learning activities in complex open, online, and flexible learning environments. Distance Educ. 2020, 41, 86–105. [Google Scholar] [CrossRef]
  56. Gañán, D.; Caballé, S.; Clarisó, R.; Conesa, J.; Bañeres, D. ICT-FLAG: A web-based e-assessment platform featuring learning analytics and gamification. Int. J. Web Inf. Syst. 2017, 13, 25–54. [Google Scholar] [CrossRef]
  57. Farias, G.; Muñoz de la Peña, D.; Gómez-Estern, F.; De la Torre, L.; Sánchez, C.; Dormido, S. Adding automatic evaluation to interactive virtual labs. Interact. Learn. Environ. 2016, 24, 1456–1476. [Google Scholar] [CrossRef]
  58. Mora, N.; Caballe, S.; Daradoumis, T. Providing a multi-fold assessment framework to virtualized collaborative learning in support for engineering education. Int. J. Emerg. Technol. Learn. 2016, 11, 41–51. [Google Scholar] [CrossRef] [Green Version]
  59. Barra, E.; López-Pernas, S.; Alonso, Á.; Sánchez-Rada, J.F.; Gordillo, A.; Quemada, J. Automated Assessment in Programming Courses: A Case Study during the COVID-19 Era. Sustainability 2020, 12, 7451. [Google Scholar] [CrossRef]
  60. Sekendiz, B. Utilisation of formative peer-assessment in distance online education: A case study of a multi-model sport management unit. Interact. Learn. Env. 2018, 26, 682–694. [Google Scholar] [CrossRef]
  61. Watson, C.; Wilson, A.; Drew, V.; Thompson, T.L. Small data, online learning and assessment practices in higher education: A case study of failure? Assess. Eval. High. Educ. 2017, 42, 1030–1045. [Google Scholar] [CrossRef] [Green Version]
  62. Albazar, H. A New Automated Forms Generation Algorithm for Online Assessment. J. Inf. Knowl. Manag. 2020, 19, 2040008. [Google Scholar] [CrossRef]
  63. Tan, P.J.; Hsu, M.H. Designing a system for English evaluation and teaching devices: A PZB and TAM model analysis. Eurasia J. Math. Sci. Technol. Educ. 2018, 14, 2107–2119. [Google Scholar] [CrossRef]
  64. Saha, S.K.; Rao, D. Development of a practical system for computerized evaluation of descriptive answers of middle school level students. Interact. Learn. Env. 2019. [Google Scholar] [CrossRef]
  65. Jayashankar, S.; Sridaran, R. Superlative model using word cloud for short answers evaluation in eLearning. Educ. Inf. Technol. 2017, 22, 2383–2402. [Google Scholar] [CrossRef]
  66. Mimirinis, M. Qualitative differences in academics’ conceptions of e-assessment. Assess. Eval. High. Educ. 2019, 44, 233–248. [Google Scholar] [CrossRef]
  67. Babo, R.; Suhonen, J. E-assessment with multiple choice questions: A qualitative study of teachers’ opinions and experience regarding the new assessment strategy. Int. J. Learn. Technol. 2018, 13, 220–248. [Google Scholar] [CrossRef]
  68. Zhan, Y.; So, W.W. Views and practices from the chalkface: Development of a formative assessment multimedia learning environment. Technol. Pedagog. Inf. 2017, 26, 501–515. [Google Scholar] [CrossRef]
  69. Su, H. Educational Assessment of the Post-Pandemic Age: Chinese Experiences and Trends Based on Large-Scale Online Learning. Educ. Meas. 2020, 39, 37–40. [Google Scholar] [CrossRef]
  70. Tenorio, T.; Bittencourt, I.I.; Isotani, S.; Pedro, A.; Ospina, P. A gamified peer assessment model for online learning environments in a competitive context. Comput. Hum. Behav. 2016, 64, 247–263. [Google Scholar] [CrossRef]
  71. Ramírez-Donoso, L.; Pérez-Sanagustín, M.; Neyem, A. MyMOOCSpace: Mobile cloud-based system tool to improve collaboration and preparation of group assessments in traditional engineering courses in higher education. Comput. Appl. Eng. Educ. 2018, 26, 1507–1518. [Google Scholar] [CrossRef]
  72. Mihret, D.G.; Abayadeera, N.; Watty, K.; McKay, J. Teaching auditing using cases in an online learning environment: The role of ePortfolio assessment. ACC Educ. 2017, 26, 335–357. [Google Scholar] [CrossRef]
  73. Purkayastha, S.; Surapaneni, A.K.; Maity, P.; Rajapuri, A.S.; Gichoya, J.W. Critical Components of Formative Assessment in Process-Oriented Guided Inquiry Learning for Online Labs. Electron. J. E-Learn. 2019, 17, 79–92. [Google Scholar] [CrossRef] [Green Version]
  74. Link, S.; Mehrzad, M.; Rahimi, M. Impact of automated writing evaluation on teacher feedback, student revision, and writing improvement. Comput. Assist. Lang. Learn. 2020, 1–30. [Google Scholar] [CrossRef]
  75. McVey, M. Preservice Teachers’ Perception of Assessment Strategies in Online Teaching. J. Digit. Learn. Teach. Educ. 2016, 32, 119–127. [Google Scholar] [CrossRef]
  76. Ebadi, S.; Rahimi, M. Mediating EFL learners’ academic writing skills in online dynamic assessment using Google Docs. Comput. Assist. Lang. Learn. 2019, 32, 527–555. [Google Scholar] [CrossRef]
  77. Hardianti, R.D.; Taufiq, M.; Pamelasari, S.D. The development of alternative assessment instrument in web-based scientific communication skill in science education seminar course. J. Pendidik. IPA Indones. 2017, 6. [Google Scholar] [CrossRef] [Green Version]
  78. Johansson, E. The Assessment of Higher-order Thinking Skills in Online EFL Courses: A Quantitative Content Analysis. Engl. Stud. 2020, 19, 224–256. [Google Scholar]
  79. Magal-Royo, T.; Garcia Laborda, J.; Price, S. A New m-Learning Scenario for a Listening Comprehension Assessment Test in Second Language Acquisition [SLA]. J. Univers. Comput. Sci. 2017, 23, 1200–1214. [Google Scholar]
  80. Maguire, P.; Maguire, R.; Kelly, R. Using automatic machine assessment to teach computer programming. Comput. Sci. Educ. 2017, 27, 197–214. [Google Scholar] [CrossRef] [Green Version]
  81. Mustakerov, I.; Borissova, D. A Framework for Development of e-learning System for computer programming: Application in the C programming Language. J. E-Learn. Knowl. Soc. 2017, 13, 2. [Google Scholar]
  82. Hu, Y.; Wu, B.; Gu, X. Learning analysis of K-12 students’ online problem solving: A three-stage assessment approach. Interact. Learn. Environ. 2017, 25, 262–279. [Google Scholar] [CrossRef]
  83. Mayeshiba, M.; Jansen, K.R.; Mihlbauer, L. An Evaluation of Critical Thinking in Competency-Based and Traditional Online Learning Environments. Online Learn. 2018, 22, 77–89. [Google Scholar] [CrossRef] [Green Version]
  84. Thompson, M.M.; Braude, E.J. Evaluation of Knowla: An online assessment and learning tool. J. Educ. Comput. Res. 2016, 54, 483–512. [Google Scholar] [CrossRef]
  85. Roberts, A.M.; LoCasale-Crouch, J.; Hamre, B.K.; Buckrop, J.M. Adapting for Scalability: Automating the Video Assessment of Instructional Learning. Online Learn. 2017, 21, 257–272. [Google Scholar] [CrossRef] [Green Version]
  86. Neumann, M.M.; Worrall, S.; Neumann, D.L. Validation of an expressive and receptive tablet assessment of early literacy. J. Res. Technol. Educ. 2019, 51, 326–341. [Google Scholar] [CrossRef]
  87. Wilson, M.; Scalise, K.; Gochyyev, P. Domain modelling for advanced learning environments: The BEAR Assessment System Software. Educ. Psychol. 2019, 39, 1199–1217. [Google Scholar] [CrossRef]
  88. Birjali, M.; Beni-Hssane, A.; Erritali, M. A novel adaptive e-learning model based on Big Data by using competence-based knowledge and social learner activities. Appl. Soft Comput. 2018, 69, 14–32. [Google Scholar] [CrossRef]
89. Nikou, S.A.; Economides, A.A. An outdoor mobile-based assessment activity: Measuring students’ motivation and acceptance. Int. J. Interact. Mob. Technol. 2016, 10, 11–17.
90. Wang, Y.; Fang, H.; Jin, Q.; Ma, J. SSPA: An effective semi-supervised peer assessment method for large scale MOOCs. Interact. Learn. Environ. 2019.
91. García, A.C.; Gil-Mediavilla, M.; Álvarez, I.; Casares, M.D. Evaluación entre iguales en entornos de educación superior online mediante el taller de Moodle. Estudio de caso. Form. Univ. 2020, 13, 119–126.
92. Holmes, N. Engaging with assessment: Increasing student engagement through continuous assessment. Act. Learn. High. Educ. 2018, 19, 23–34.
93. Romero, L.; Gutierrez, M.; Caliusco, M.L. Semantic modeling of portfolio assessment in e-learning environment. Adv. Sci. Technol. Eng. Syst. J. 2017, 2, 149–156.
94. Nikou, S.A.; Economides, A.A. Mobile-based assessment: Investigating the factors that influence behavioral intention to use. Comput. Educ. 2017, 109, 56–73.
95. Karay, Y.; Reiss, B.; Schauber, S.K. Progress testing anytime and anywhere–Does a mobile-learning approach enhance the utility of a large-scale formative assessment tool? Med. Teach. 2020, 42, 1154–1162.
96. Nikou, S.A.; Economides, A.A. Mobile-Based Assessment: Integrating acceptance and motivational factors into a combined model of Self-Determination Theory and Technology Acceptance. Comput. Hum. Behav. 2017, 68, 83–95.
97. Nikou, S.A.; Economides, A.A. The impact of paper-based, computer-based and mobile-based self-assessment on students’ science motivation and achievement. Comput. Hum. Behav. 2016, 55, 1241–1248.
98. Altınay, Z. Evaluating peer learning and assessment in online collaborative learning environments. Behav. Inf. Technol. 2017, 36, 312–320.
99. Chee, K.; Yahaya, N.; Ibrahim, N.; Hasan, M.N. Review of mobile learning trends 2010–2015: A meta-analysis. Educ. Technol. Soc. 2017, 20, 113–126.
100. Jo, J.; Jun, H.; Lim, H. A comparative study on gamification of the flipped classroom in engineering education to enhance the effects of learning. Comput. Appl. Eng. Educ. 2018, 26, 1626–1640.
101. Arana, A.I.; Gironés, M.V.; Olagaray, M.L. Mejora de los procesos de evaluación mediante analítica visual del aprendizaje. Educ. Knowl. Soc. 2020, 21, 9.
102. Deena, G.; Raja, K.; PK, N.B.; Kannan, K. Developing the Assessment Questions Automatically to Determine the Cognitive Level of the E-Learner Using NLP Techniques. Int. J. Serv. Sci. Manag. Eng. Technol. 2020, 11, 95–110.
103. Aljohany, D.A.; Salama, R.M.; Saleh, M. ASSA: Adaptive E-Learning Smart Students Assessment Model. Int. J. Adv. Comput. Sci. Appl. 2018, 9, 128–136.
104. Paiva, R.C.; Ferreira, M.S.; Frade, M.M. Intelligent tutorial system based on personalized system of instruction to teach or remind mathematical concepts. J. Comput. Assist. Learn. 2017, 33, 370–381.
105. Bendaly Hlaoui, Y.; Hajjej, F.; Jemni Ben Ayed, L. Learning analytics for the development of adapted e-assessment workflow system. Comput. Appl. Eng. Educ. 2016, 24, 951–966.
106. Almuayqil, S.; Abd El-Ghany, S.A.; Shehab, A. Towards an Ontology-Based Fully Integrated System for Student E-Assessment. J. Appl. Inf. Technol. 2020, 98, 21.
107. Khdour, T. A semantic assessment framework for e-learning systems. Int. J. Knowl. Learn. 2020, 13, 110–122.
108. Vachharajani, V.; Pareek, J. Effective Structure Matching Algorithm for Automatic Assessment of Use-Case Diagram. Int. J. Distance Educ. Technol. 2020, 18, 31–50.
109. Daradoumis, T.; Puig, J.M.; Arguedas, M.; Liñan, L.C. Analyzing students’ perceptions to improve the design of an automated assessment tool in online distributed programming. Comput. Educ. 2019, 128, 159–170.
110. Santhanavijayan, A.; Balasundaram, S.R.; Narayanan, S.H.; Kumar, S.V.; Prasad, V.V. Automatic generation of multiple choice questions for e-assessment. Int. J. Signal. Imaging Syst. Eng. 2017, 10, 54–62.
111. Striewe, M. An architecture for modular grading and feedback generation for complex exercises. Sci. Comput. Program. 2016, 129, 35–47.
112. Khlaisang, J.; Koraneekij, P. Open online assessment management system platform and instrument to enhance the information, media, and ICT literacy skills of 21st century learners. Int. J. Emerg. Technol. Learn. 2019, 14, 111–127.
113. Nissen, J.M.; Jariwala, M.; Close, E.W.; Van Dusen, B. Participation and performance on paper- and computer-based low-stakes assessments. Int. J. Stem Educ. 2018, 5, 21.
114. Kortemeyer, G. Scalable continual quality control of formative assessment items in an educational digital library: An empirical study. Int. J. Digit. Libr. 2016, 17, 143–155.
115. Kranenburg, L.; Reerds, S.T.; Cools, M.; Alderson, J.; Muscarella, M.; Grijpink, K.; Quigley, C.; Drop, S.L. Global application of assessment of competencies of Paediatric endocrinology fellows in the Management of Differences of sex development (DSD) using the ESPE e-learning.org portal. Med. Sci. Educ. 2016, 26, 679–689.
116. Tsai, S.C. Implementing interactive courseware into EFL business writing: Computational assessment and learning satisfaction. Interact. Learn. Environ. 2019, 27, 46–61.
117. Lowe, T.W.; Mestel, B.D. Using STACK to support student learning at masters level: A case study. Teach. Math. Its Appl. 2020, 39, 61–70.
118. Massing, T.; Schwinning, N.; Striewe, M.; Hanck, C.; Goedicke, M. E-assessment using variable-content exercises in mathematical statistics. J. Stat. Educ. 2018, 26, 174–189.
119. Ilahi-Amri, M.; Cheniti-Belcadhi, L.; Braham, R. A Framework for Competence based e-Assessment. IxDA 2017, 32, 189–204.
120. Misut, M.; Misutova, M. Software Solution Improving Productivity and Quality for Big Volume Students’ Group Assessment Process. Int. J. Emerg. Technol. Learn. 2017, 12, 175–190.
121. Gwynllyw, D.R.; Weir, I.S.; Henderson, K.L. Using DEWIS and R for multi-staged statistics e-Assessments. Teach. Math. Its Appl. 2016, 35, 14–26.
122. Khalaf, B. Traditional and Inquiry-Based Learning Pedagogy: A Systematic Critical Review. Int. J. Instr. 2018, 11, 545–564.
123. Magolda, M. Creating Contexts for Learning and Self-Authorship: Constructive-Developmental Pedagogy; Vanderbilt University Press: Nashville, TN, USA, 1999.
124. Breunig, M. Turning experiential education and critical pedagogy theory into praxis. J. Exp. Educ. 2005, 28, 106–122.
125. Shcherbinin, M.; Vasilievich Kruchinin, S.; Gennadievich Ivanov, A. MOOC and MOOC degrees: New learning paradigm and its specifics. Manag. Appl. Sci. Tech. 2019, 10, 1–14.
Figure 1. PRISMA flow diagram [18] for the systematic literature review about information and communication technologies (ICT)-mediated assessment.
Figure 2. Number of selected research articles per year of publication.
Figure 3. Distribution of retrieved research articles per continent.
Figure 4. Types of studies in the selected papers.
Figure 5. Actors addressed in the selected studies.
Figure 6. Elements considered for ICT-mediated assessment.
Figure 7. Number of documents categorized by field of knowledge.
Figure 8. Pedagogical approaches used in ICT-mediated assessment research.
Table 1. Search criteria for the systematic review.
1. Database: SCOPUS, SAGE Journals, Taylor & Francis.
2. Type of publication: research journal.
3. Year of publication: between 2016 and 2020.
4. Inclusion criteria: ICT-mediated assessment and evaluation; ICT-mediated assessment approaches; perception studies in online settings; development and description of tools and platforms for online assessment; experimental research comparing online and face-to-face assessment.
5. Exclusion criteria: evaluation of online platforms, digital contents, and online learning methodologies not focused on assessment processes; design of courses, platforms, or digital contents; assessment approaches not related to online settings; assessment instruments not related to online settings; no access to the full text.
Table 2. Categories and research questions for the systematic review of ICT-mediated assessment. Each row lists a category, its elements, and the type of analysis applied.
All questions
  • Year of publication: ~ (quantitative synthesis)
  • Country: ~ (quantitative synthesis)
  • Type of study: perception, experimental, descriptive, case study, technological development (quantitative synthesis)
(1) What are the meanings of ICT-mediated assessment?
  • Actors: students, teachers, students and teachers (quantitative synthesis)
  • What is evaluated?: contents, skills, outcomes (quantitative synthesis and meta-analysis)
  • Purpose of the evaluation: ~ (meta-analysis)
(2) What kind of knowledge is susceptible to ICT-mediated assessment?
  • Fields of knowledge: foreign language, social sciences, sciences, engineering, art and humanities, economic and administrative sciences, education, health (quantitative synthesis)
  • Levels: K-12, higher education (quantitative synthesis)
(3) What are the assessment possibilities offered by the current ICT platforms?
  • Digital tools, strategies, and platforms: ~ (meta-analysis)
  • Pedagogical approach: traditional, non-traditional, critical (quantitative synthesis and meta-analysis)
The symbol ~ indicates that no elements were defined for that category.
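The quantitative synthesis announced in Table 2 amounts to counting the coded articles in each category, which the review then reports in Figures 2 through 8. The snippet below is a minimal, hypothetical sketch of that tabulation step in Python; the coding-sheet layout and the example records are invented for illustration and do not reproduce the review's actual data.

```python
from collections import Counter

# Hypothetical coding sheet: one record per selected article, following the
# categories of Table 2 (year, type of study, actors, field of knowledge).
coded_articles = [
    {"year": 2017, "type_of_study": "Experimental", "actors": "Students", "field": "Engineering"},
    {"year": 2019, "type_of_study": "Perception", "actors": "Students and teachers", "field": "Education"},
    {"year": 2019, "type_of_study": "Technological development", "actors": "Students", "field": "Sciences"},
]

def synthesis(records, category):
    """Quantitative synthesis for one category: article counts per element."""
    return Counter(record[category] for record in records)

if __name__ == "__main__":
    for category in ("year", "type_of_study", "actors", "field"):
        print(category, dict(synthesis(coded_articles, category)))
```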
Table 3. Key strategies to incorporate ICT-mediated assessment into learning processes.
Self-assessment [35,97]
  • Description: students are involved in monitoring their own work, evaluating their performance, and building learning plans; increases learning motivation and contributes to a better understanding.
  • Challenges: requires further research to establish its validity and impact on learners.
Peer-assessment [22,30,34,40,49,60,70,90,98] (see the aggregation sketch after this table)
  • Description: students rate their peers; develops reflection skills and encourages responsibility; useful for online, blended, and massive courses.
  • Challenges: some question its validity and reliability; requires proper instruction to train students to grade their peers.
Mobile assessment [15,37,52,71,89,94,95,96,97,99]
  • Description: uses a mobile device (cellphone or tablet) for assessment; allows periodic evaluations; can be carried out anywhere.
  • Challenges: there may be distractions during the evaluation process.
Gamification [56,70,100]
  • Description: uses games for problem-solving and skill development; increases student interest and engagement.
  • Challenges: can be inefficient for students who do not like games; requires specific developments according to the area of knowledge.
Augmented reality [43,50]
  • Description: emulation of a real-world environment for interaction; increases student interest and engagement.
  • Challenges: requires specific developments according to the area of knowledge.
Learning analytics and adaptive assessments [36,55,87,88,101,102,103,104,105]
  • Description: captures data during the training process to identify students’ strengths, opportunities, and limitations; allows the training process to be adapted to the skills of each student.
  • Challenges: incorporating data analysis into educational practices.
Automated assessment [56,57,59,64,71,106,107,108,109,110,111]
  • Description: allows the generation, scoring, and automatic feedback of evaluations.
  • Challenges: requires specific developments according to the area of knowledge.
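Peer-assessment, as summarized in Table 3, ultimately requires turning several peer grades into a single mark, and its reliability concerns are usually addressed by aggregating robustly and weighting raters. The sketch below is a minimal, hypothetical Python illustration of that idea; it is not the SSPA method of [90] nor any specific platform's algorithm, and the rubric scale, weighting rule, and data layout are assumptions made for illustration only.

```python
from statistics import median, mean

# Hypothetical peer-rating data: submission id -> {rater id: grade on a 0-100 rubric}.
peer_grades = {
    "essay_01": {"s2": 78, "s3": 85, "s4": 40},
    "essay_02": {"s1": 90, "s3": 88, "s4": 86},
    "essay_03": {"s1": 70, "s2": 72, "s4": 69},
}

def consensus(grades):
    """Robust first-pass score per submission: the median of its peer grades."""
    return {sub: median(g.values()) for sub, g in grades.items()}

def rater_weights(grades, first_pass):
    """Weight each rater by how closely they track the class consensus (smaller
    average deviation -> larger weight); a crude stand-in for rater calibration."""
    deviations = {}
    for sub, g in grades.items():
        for rater, score in g.items():
            deviations.setdefault(rater, []).append(abs(score - first_pass[sub]))
    return {rater: 1.0 / (1.0 + mean(devs)) for rater, devs in deviations.items()}

def final_scores(grades):
    """Weighted average of peer grades using the calibration weights."""
    first_pass = consensus(grades)
    weights = rater_weights(grades, first_pass)
    result = {}
    for sub, g in grades.items():
        total_weight = sum(weights[r] for r in g)
        result[sub] = sum(weights[r] * score for r, score in g.items()) / total_weight
    return result

if __name__ == "__main__":
    for submission, score in final_scores(peer_grades).items():
        print(f"{submission}: {score:.1f}")
```

Down-weighting raters that drift from the consensus is one simple way to address the reliability concerns listed in Table 3; deployed systems add rubric training, teacher-graded anchor submissions, or semi-supervised models such as SSPA [90].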
Table 4. Software and platforms that support ICT-mediated assessment.
SOCRATIVE [44]: web-based platform for quizzes.
EduZinc [55]: application to customize assessment activities according to the needs and skills of students.
ICT-FLAG [56]: formative assessment tool including learning analytics and gamification services.
CC-LR prototype [58]: collaborative complex learning resource with personalized awareness and feedback based on learning analytics.
FAMLE [68]: formative assessment multimedia learning environment based on assessment tasks that measure performance, learning, and knowledge, and that display learning data to students and teachers.
MeuTutor [70]: intelligent tutoring system that allows monitoring of the formative process.
MyMOOCSpace [71]: cloud-based mobile system for collaborative learning processes.
KNOWLA [84]: knowledge assembly web-based interface for creating and grading assessments in which students assemble a set of scrambled fragments into a logical order.
BASS 1.0 [87]: web-based system to design, develop, and deliver assessment and feedback.
TCU-AMS [112]: open online assessment management system compatible with the Open edX platform, supporting traditional, self-, and peer-assessment.
DSLab [109]: web-based system with automatic assessment, feedback, and interactive comparison between the student solution and the correct solution.
COBLE [101]: competence-based learning environment providing visual information about assessment to students.
LASSO [113]: Learning About STEM Student Outcomes, a web-based platform with assessment instruments in several disciplines.
LON-CAPA [114]: open-source platform for creating and delivering assessments.
ESPE-eLearning [115]: European Society for Paediatric Endocrinology e-learning portal for medical training.
English for International Trade and Business [116]: multimedia platform for Chinese EFL students including a self-checking and feedback system.
Adobe Connect [98]: software for online training and web conferencing.
STACK [117]: open-source tool for randomized questions, integrated into Moodle, with automatic feedback; oriented to mathematical and algebraic questions (the randomization pattern is sketched after this table).
JACK [118]: computer-assisted assessment platform, originally created for programming courses; currently supports several fields.
SWAP-COMP [119]: platform to support competence-based learning.
SANCHO [120]: client–server application to support automatic evaluation of text.
Cloud-AWAS [105]: Cloud Adapted Workflow e-Assessment System; supports learning analytics and can be integrated into any LMS.
DEWIS [121]: e-assessment system integrating embedded R code; supports statistical analysis assessment.
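Several of the platforms in Table 4 (STACK, DEWIS, JACK) build each student a different instance of the same exercise by randomizing parameters and then checking the numeric answer automatically. The fragment below is a minimal sketch of that pattern in Python rather than in any of those systems' own authoring languages; the question template, tolerance, and feedback messages are invented for illustration.

```python
import random

def make_item(seed):
    """Generate one randomized instance of a 'calculated' question: the mean of
    five random integers. Returns the question text and the expected answer."""
    rng = random.Random(seed)                 # per-student seed -> reproducible variant
    values = [rng.randint(1, 20) for _ in range(5)]
    question = f"Compute the mean of the sample {values} (2 decimal places)."
    expected = sum(values) / len(values)
    return question, expected

def grade(expected, submitted, tolerance=0.01):
    """Score the numeric answer and build feedback, as e-assessment engines do."""
    if abs(submitted - expected) <= tolerance:
        return 1.0, "Correct."
    return 0.0, f"Not quite: the sample mean is {expected:.2f}. Revise how the mean is computed."

if __name__ == "__main__":
    question, expected = make_item(seed="student-42")   # each student id yields its own variant
    print(question)
    score, feedback = grade(expected, submitted=10.40)
    print(f"score={score}, feedback={feedback}")
```

In STACK the same logic is written for the Maxima computer algebra system and in DEWIS it can call embedded R code [121], which allows the feedback to branch on the specific algebraic or statistical error rather than only on correctness.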
Table 5. Evaluation in contemporary pedagogy. For each pedagogical approach, the table summarizes its emphasis, knowledge structure and curriculum, teacher–student relationship, and evaluation and assessment.
Traditional [122]
  • Emphasis on the teacher; knowledge verbally transmitted through repetition; training; encyclopedist curriculum.
  • The teacher is the authority.
  • Summative evaluation based on student products.
Nontraditional (experiential, New School, cognitivist, developmental, constructivist) [123]
  • The student is the protagonist of their learning; intellectual development through progress and sequence according to psychological analysis; emphasis on the learning process.
  • Development of skills from previous knowledge; problem-solving; student autonomy; collaborative work.
  • The teacher is a facilitator or mediator who identifies student needs and stimulates critical, creative, and reflective thinking.
  • Formative and summative evaluation; assesses process and results; permanent feedback; improves learning.
Critical [124]
  • Emphasis on social emancipation from context recognition.
  • Knowledge as a process of production based on awareness and dialogue.
  • Horizontal relationship based on autonomy and responsibility from self-reflection.
  • Non-neutral evaluation for the improvement of the teaching–learning process.
Table 6. LMS evaluation tools: type of questions and general configurations supported by Moodle, DOKEOS, Claroline, SAKAI, Microsoft Teams, and Google Classroom.
Moodle 1
  • Type of questions: calculated; essay; true/false; numerical; multiple choice; calculated multiple choice; matching; short answer; embedded response; calculated simple.
  • Configurations: random order of questions; questions conditioned by other questions; question bank.
DOKEOS 2
  • Type of questions: 29 question types, including multiple-choice questions, questionnaires with multiple answers, embedded response, open question, matching, detection of zones, and delineation.
  • Configurations: random order of questions.
Claroline 3
  • Type of questions: multiple choice; embedded response; graphics question; table question; question association; matching; open question; adding media.
  • Configurations: random order of questions; questionnaire generation by theme or level of difficulty; question bank.
SAKAI 4
  • Type of questions: multiple choice; matching; true/false; short answer/essay; fill in the blank; numeric response; calculated; hot spot; survey; audio response; file upload.
  • Configurations: random order of questions.
Microsoft Teams 5
  • Type of questions: multiple choice; text.
  • Configurations: random order of questions.
Google Classroom 6
  • Type of questions: short answer; paragraph; multiple choice; checkboxes; dropdown; file upload.
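The configurations that recur across the LMS platforms in Table 6, namely a reusable question bank, random ordering of questions, and several question types, can be pictured with a small script. The sketch below is a hypothetical Python illustration rather than any LMS's real API: it draws a shuffled quiz from a tiny bank and grades multiple-choice and numerical answers.

```python
import random

# Hypothetical question bank mixing two common LMS question types.
QUESTION_BANK = [
    {"id": "q1", "type": "multiple_choice",
     "text": "Which assessment type gives feedback during the course?",
     "options": ["Summative", "Formative", "Diagnostic only"], "answer": "Formative"},
    {"id": "q2", "type": "numerical",
     "text": "How many LMS platforms are compared in Table 6?", "answer": 6, "tolerance": 0},
    {"id": "q3", "type": "multiple_choice",
     "text": "Which tool family supports randomized calculated questions?",
     "options": ["Word processors", "e-assessment engines"], "answer": "e-assessment engines"},
]

def build_quiz(bank, n_questions, seed):
    """Emulate the 'random order of questions' configuration: sample and shuffle."""
    rng = random.Random(seed)
    return rng.sample(bank, n_questions)

def grade(question, response):
    """Score one response; numerical items accept a tolerance, as most LMS do."""
    if question["type"] == "numerical":
        return float(abs(float(response) - question["answer"]) <= question["tolerance"])
    return float(response == question["answer"])

if __name__ == "__main__":
    quiz = build_quiz(QUESTION_BANK, n_questions=2, seed="attempt-7")
    fake_responses = {"q1": "Formative", "q2": 6, "q3": "Word processors"}
    total = sum(grade(q, fake_responses[q["id"]]) for q in quiz)
    print(f"Score: {total}/{len(quiz)}")
```

Keeping items in a bank separate from any particular quiz is what allows the random-order and conditional-question configurations listed in Table 6; each attempt is assembled on demand from the same pool.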
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
