Application of New Technologies for Assessment in Higher Education

A special issue of Education Sciences (ISSN 2227-7102). This special issue belongs to the section "Technology Enhanced Education".

Deadline for manuscript submissions: 31 July 2024

Special Issue Editors


Prof. Dr. Mike Joy
Guest Editor
Department of Computer Science, University of Warwick, Coventry CV4 7AL, UK
Interests: educational technology; computer science education; higher education; mobile learning

Dr. Peter Williams
Guest Editor
School of Education, University of Hull, Hull HU6 7RX, UK
Interests: technology-enhanced learning; blended learning; e-assessment; education future scenarios

Special Issue Information

Dear Colleagues,

The landscape in which HE institutions operate has been changing, with increasing numbers of students attending universities with diverse missions. As a result, traditional models of course delivery are being supplemented or replaced by flexible approaches assisted by new technologies, a process accelerated by the recent pandemic. Existing generic tools such as learning management systems and virtual learning environments are complemented by software that fulfils particular educational tasks, such as learning analytics software supporting the assessment process. The development of assessment tools for use within HE institutions gives rise to technical, pedagogic, and administrative challenges for the assessment process.

The goal of this Special Issue is to consider the implications of applying new technological approaches to the assessment process. The issue will relate to much of the recent literature on the educational impacts of the pandemic, to the potential of emerging technologies for assessment, and to the institutional challenges of universities operating in an increasingly global market. We welcome research papers that take technical, pedagogical, or administrative views of that process. Possible topics include, but are not limited to, the following:

  • Evaluation of the effectiveness of particular approaches;
  • Incorporation of new computer science techniques, such as learning analytics, artificial intelligence, and machine learning;
  • Student perceptions of the assessment process;
  • Implications for pedagogic theory;
  • Incorporation of assessment technologies into existing learning environments.

Prof. Dr. Mike Joy
Dr. Peter Williams
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website; once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a double-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Education Sciences is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • assessment
  • HE
  • higher education
  • university
  • technology
  • learning analytics

Published Papers (5 papers)


Research

24 pages, 789 KiB  
Article
Assessment Automation of Complex Student Programming Assignments
by Matija Novak and Dragutin Kermek
Educ. Sci. 2024, 14(1), 54; https://doi.org/10.3390/educsci14010054 - 01 Jan 2024
Abstract
Grading student programming assignments is not an easy task, and it is even more challenging for complex programming assignments at the university graduate level. By complex assignments, we mean assignments where students have to program a complete application from scratch: for example, building a complete web application with a client and a server side, where the application uses multiple threads that gather data from some external service (such as a REST service or IoT sensors), process these data, and store them in some storage (e.g., a database); implements a custom protocol over a socket; implements its own REST/SOAP/GraphQL service; or sends and receives JMS/MQTT/WebSocket messages. Such assignments give students an inside view of building real Internet applications. On the other hand, assignments like these take a long time to test and grade manually, up to 1 h per student. To speed up the assessment process, there are different automation possibilities that can check the correctness of some application parts without endangering grading quality. This study describes different possibilities for automation that have been improved over several years, taking advantage of unit testing, bash scripting, and other methods. The main goal is to define an assessment process that can be used to grade complex programming assignments, with concrete examples of what and how to automate. This process involves preparing assignments for automation, detecting plagiarism (more precisely, similarity), automatically checking the correctness of each programming assignment, analysing the obtained data, and awarding points (grading) for each assignment. We also discuss the downsides of automation and why it is not possible to completely automate the grading process.
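
To make the discussion concrete, here is a minimal Python sketch of the kind of automated correctness check such a process might include. It is our own illustration, not code from the paper; the endpoint path, expected field, and scoring are hypothetical assumptions.

    # Minimal sketch of one automated check for a student web application.
    # Hypothetical assumptions: the app runs locally and exposes
    # GET /api/measurements returning a JSON list of records.
    import requests

    def check_rest_endpoint(base_url):
        """Award one point if the student's service returns a JSON list
        whose records carry the expected field; otherwise report why."""
        try:
            resp = requests.get(f"{base_url}/api/measurements", timeout=5)
            resp.raise_for_status()
            data = resp.json()
        except (requests.RequestException, ValueError) as exc:
            return 0, f"endpoint check failed: {exc}"
        # Check structure only, not exact values, so assignments using
        # different external data sources can still pass.
        if isinstance(data, list) and all(
            isinstance(item, dict) and "timestamp" in item for item in data
        ):
            return 1, "ok"
        return 0, "response missing expected fields"

    points, note = check_rest_endpoint("http://localhost:8080")
    print(points, note)

In practice, a battery of such checks (driven by unit tests or bash scripts, as the authors describe) would run per student, with failures flagged for manual review rather than automatically penalised.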

24 pages, 1418 KiB  
Article
Digital Assessment: A Survey of Romanian Higher Education Teachers’ Practices and Needs
by Gabriela Grosseck, Ramona Alice Bran and Laurențiu Gabriel Țîru
Educ. Sci. 2024, 14(1), 32; https://doi.org/10.3390/educsci14010032 - 27 Dec 2023
Abstract
Within the European Commission’s Digital Education Action Plan (2021–2027) and the DigCompEdu framework, our research focuses on the competence area of teachers’ assessment practices and needs. We designed a 24-item online questionnaire for Romanian higher education teachers who use digital technologies to assess students’ learning, learning outcomes, and practical skills. The present paper analyzes how the 60 respondents from Romanian universities evaluate their own digital competence, how they use digital assessment, and what training needs they have in this regard. This study, carried out in May–June 2022, attempts to identify the main concerns, challenges, and obstacles higher education teachers encounter when designing and using digital assessment. Our findings indicate the importance of empowering teachers through continuous learning, embracing flexible hybrid models, and reimagining assessment strategies for digital literacy. An ANOVA reveals variations in the use of digital tools among three groups categorized by self-reported digital competence. Responsible knowledge-sharing, AI literacy, and adaptive curriculum design emerged as critical imperatives. Our study advocates a transformative shift towards AI-based pedagogy, emphasizing personalized learning that aligns with teachers’ competencies and specific assessment needs while adhering to fundamental teaching principles.
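
For readers less familiar with the test reported here, the following minimal Python sketch shows a one-way ANOVA across three competence groups; the scores are synthetic placeholders, not the survey’s data.

    # Minimal sketch: one-way ANOVA across three self-reported
    # competence groups (synthetic placeholder scores, not study data).
    from scipy.stats import f_oneway

    basic    = [2, 3, 2, 3, 4, 2]   # digital-tool-use scores, "basic" group
    moderate = [3, 4, 4, 5, 3, 4]
    advanced = [5, 5, 6, 4, 6, 5]

    f_stat, p_value = f_oneway(basic, moderate, advanced)
    print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # small p suggests group differences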

24 pages, 641 KiB  
Article
AI, Analytics and a New Assessment Model for Universities
by Peter Williams
Educ. Sci. 2023, 13(10), 1040; https://doi.org/10.3390/educsci13101040 - 17 Oct 2023
Cited by 1
Abstract
As the COVID-19 pandemic recedes, its legacy has been to disrupt universities across the world, most immediately in developing online adjuncts to face-to-face teaching. Behind these problems lie those of assessment, particularly traditional summative assessment, which has proved more difficult to implement. This paper models the current practice of assessment in higher education as influenced by ten factors, the most important of which are the emerging technologies of artificial intelligence (AI) and learning analytics (LA). Using this model and a SWOT analysis, the paper argues that the pressures of marketisation and demand for nontraditional and vocationally oriented provision put a premium on courses offering a more flexible and student-centred assessment. This could be facilitated through institutional strategies enabling assessment for learning: an approach that employs formative assessment supported by AI and LA, together with collaborative working in realistic contexts, to facilitate students’ development as flexible and sustainable learners. While literature in this area tends to focus on one or two aspects of technology or assessment, this paper aims to be integrative by drawing upon more comprehensive evidence to support its thesis.

13 pages, 600 KiB  
Article
Microlearning for the Development of Teachers’ Digital Competence Related to Feedback and Decision Making
by Viviana Betancur-Chicué and Ana García-Valcárcel Muñoz-Repiso
Educ. Sci. 2023, 13(7), 722; https://doi.org/10.3390/educsci13070722 - 15 Jul 2023
Abstract
The assessment and feedback area of the European Framework for the Digital Competence of Educators (DigCompEdu) establishes a specific competence related to the ability to use digital technologies to provide feedback and make decisions for learning. According to the literature, this particular competence is one of the least developed in the teaching profession. As there are few specialised training strategies in the field of information and communication technology (ICT)-mediated feedback, this study aims to validate a microlearning proposal for university teachers, organised in levels of progression following the DigCompEdu guidelines. To validate it, a literature analysis was carried out, and a training proposal was developed and submitted to a peer-review process to assess its relevance. This study identifies the elements that should be included in a training strategy in the area of feedback and decision making for university contexts. Finally, it is concluded that this type of training requires a combination of agile and self-managed strategies (characteristics of microlearning), which can be complemented by the presentation of evidence and collaborative work with colleagues.

23 pages, 1214 KiB  
Article
Maintaining Academic Integrity in Programming: Locality-Sensitive Hashing and Recommendations
by Oscar Karnalim
Educ. Sci. 2023, 13(1), 54; https://doi.org/10.3390/educsci13010054 - 03 Jan 2023
Cited by 2
Abstract
Not many efficient similarity detectors are employed in practice to maintain academic integrity, perhaps because they lack intuitive reports for investigation, they only have a command-line interface, and/or they are not publicly accessible. This paper presents SSTRANGE, an efficient similarity detector based on locality-sensitive hashing (MinHash and Super-Bit). The tool features intuitive reports for investigation and a graphical user interface, and it is accessible on GitHub. SSTRANGE was evaluated on the SOCO dataset under two performance metrics: f-score and processing time. The evaluation shows that both MinHash and Super-Bit are more efficient than their predecessors (Cosine and Jaccard, with 60% less processing time) and a common similarity measurement (running Karp-Rabin greedy string tiling, with 99% less processing time), while the effectiveness trade-off remains reasonable (no more than 24%). Higher effectiveness can be obtained by tuning the number of clusters and stages. To encourage the use of automated similarity detectors, we provide ten recommendations for instructors interested in employing such detectors for the first time, covering assessment design, irregular patterns of similarity, multiple similarity measurements, and the effectiveness–efficiency trade-off. The recommendations are based on our 2.5 years of experience employing similarity detectors (SSTRANGE’s predecessors) in 13 course offerings with various assessment designs.
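
As background on the technique, the sketch below illustrates the core MinHash idea that this family of detectors builds on: two programs whose shingle sets overlap heavily tend to agree on many signature slots. It is our own simplified illustration, not SSTRANGE’s implementation.

    import random

    def shingles(code, k=5):
        # Normalise whitespace, then take overlapping character k-grams.
        text = " ".join(code.split())
        return {text[i:i + k] for i in range(max(1, len(text) - k + 1))}

    def minhash_signature(shingle_set, num_hashes=128, seed=0):
        # One salted hash per signature slot; keeping the minimum hash value
        # makes two sets agree on a slot roughly in proportion to their
        # Jaccard similarity.
        rng = random.Random(seed)
        salts = [rng.getrandbits(64) for _ in range(num_hashes)]
        return [min(hash((salt, s)) for s in shingle_set) for salt in salts]

    def estimate_similarity(sig_a, sig_b):
        # The fraction of matching slots estimates Jaccard similarity.
        return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

    sig1 = minhash_signature(shingles("def add(a, b): return a + b"))
    sig2 = minhash_signature(shingles("def add(x, y): return x + y"))
    print(f"estimated similarity: {estimate_similarity(sig1, sig2):.2f}")

Comparing fixed-length signatures instead of full shingle sets is what makes the approach efficient: each pairwise comparison costs O(num_hashes) regardless of program size.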
