Article

Data-Driven Decision-Making (DDDM) for Higher Education Assessments: A Case Study

by Samuel Kaspi and Sitalakshmi Venkatraman *
Department of Information Technology, Melbourne Polytechnic, 77 St Georges Rd, Preston, VIC 3072, Australia
* Author to whom correspondence should be addressed.
Systems 2023, 11(6), 306; https://doi.org/10.3390/systems11060306
Submission received: 23 May 2023 / Revised: 8 June 2023 / Accepted: 9 June 2023 / Published: 13 June 2023
(This article belongs to the Topic Data-Driven Group Decision-Making)

Abstract

The higher education (HE) system is witnessing immense transformations to keep pace with the rapid advancements in digital technologies and with the recent COVID-19 pandemic, which compelled educational institutions to switch completely to online teaching and assessments. Assessments are considered to play an important and powerful role in students’ educational experience and in the evaluation of their academic abilities. However, there are many stigmas associated with both “traditional” and alternative assessment methods. Assessments are increasingly being rethought worldwide to keep up with the shift in current teaching and learning paradigms, driven by the new possibilities of digital technologies and the continuous improvement of student engagement. Many educational decisions, such as a change in assessment from traditional summative exams to alternative methods, require appropriate rationale and justification. In this paper, we adopt data-driven decision-making (DDDM) as a process for rethinking assessment methods and implementing assessment transformations innovatively in an HE environment. We make use of student performance data to make an informed decision for moving from exam-based assessments to nonexam assessment methods. We demonstrate the application of the DDDM approach for an educational institute by analyzing the impact of transforming the assessments of 13 out of 27 subjects offered in a Bachelor of Information Technology (BIT) program as a case study. A comparison of data analysis performed before, during, and after the COVID-19 pandemic, using student learning measures such as failure rates and mean marks, provides meaningful insights into the impact of the assessment transformations. Our implementation of the DDDM model, along with an examination of the factors influencing student learning through assessment transformations in an HE environment, is the first of its kind. With many HE providers facing several challenges due to the adoption of blended learning, this pilot study based on a DDDM approach encourages innovation in classroom teaching and assessment redesign. In addition, it opens further research into implementing such evidence-based practices for future classroom innovations and assessment transformations towards achieving higher levels of educational quality.

1. Introduction

In higher education (HE), there are two main assessment categories, namely, summative assessment and formative assessment. A summative assessment happens at the endpoint of learning, as implied by the term “summative” [1,2]. In contrast, formative assessments are devised to monitor student learning and provide ongoing feedback that lecturers can use to improve their teaching and students can use to improve their learning. Normally, summative assessments carry a high weight in terms of the total marks, whilst formative assessments carry low weights. Examples of summative assessments include class tests, semester exams, and standardized tests or exams. It is common practice to administer such a summative exam for students to demonstrate the learning achieved at the end of a semester. Traditionally, for several decades, students were required to take an exam independently, without any resources and within a time-bound duration. An exam could take multiple forms, with question types such as multiple-choice, short answer, essay, etc. Such exams have several advantages: it is easy to assess a student’s individual learning outcomes and to enforce academic integrity. Further, HE lecturers are under pressure, as preparing alternative assessments with detailed plans that meet the required standards and guard against academic dishonesty takes more time and effort [3,4,5]. However, more recently, there has been a shift towards adopting nonexam assessment methods, and there are several compelling reasons for this trend. This paper explores the ramifications of this trend and demonstrates the use of a data-driven decision-making model for an educational institute offering HE programs, as a case study. The institute aims to shift certain subjects of a Bachelor of Information Technology (BIT) program to a nonexam mode of assessment. Lecturers often take an intuitive approach to designing assessments, guided by pedagogical principles on the one hand and teaching principles on the other. However, intuition-driven decision-making draws on one’s experience and has inherent problems such as bias, inaccuracy, and overconfidence [6]. A data-driven approach to decision-making has the advantage of being evidence-based. In the context of choosing the appropriate type and design of assessments, there is a lack of research on the adoption of data-driven decision-making (DDDM) within a blended learning environment, especially with the sudden requirement of remote teaching during the COVID-19 pandemic. This paper addresses this gap in the literature by adopting a DDDM approach for assessment transformations in the BIT program offered at an educational institute as a case study.
There are limitations in both summative and formative assessments, and creating high-quality educational assessments is important to meet the required HE standards. Success comes from the science of assessment design and grading procedures as well as the art of creatively engaging students, in particular with formative assessments [7]. Such concepts are becoming more prevalent in HE as education providers broaden learning outcomes to include not only cognitive outcomes (such as knowledge and skills) but also behavioral (actual student behaviors in the classroom) and affective outcomes (such as student values, attitudes, and motivation), as categorized in the taxonomy of Bloom and Krathwohl [8,9]. While a combination of knowledge and skill assessments can be used to examine cognitive outcomes, a variety of tasks to assess behavioral and affective outcomes would contribute to a better understanding of student learning and the required learning support.
The design of the assessment tasks throughout an HE course is expected to cover a range of difficulty and complexity supporting the different levels of learner motivation. Curriculum designers and lecturers have the task of aligning learning outcomes and the different assessments towards actively engaging learners in their learning, with an aim to determine what they wish to learn as well as what they actually learn. Therefore, decisions on formative and summative assessment design and delivery could be biased by the emphasis given to particular perspectives among the multiple perspectives that change dynamically from time to time. With digital technologies playing a major role in blended course delivery and online student learning, justified, data-informed decision-making is being researched in practice. In addition, extraordinary situations such as the COVID-19 pandemic resulted in course delivery and student assessments being completely online. The recent high-stakes environment has compelled many lecturers and HE providers to emphasize some elements of the learning outcomes at the expense of others, and these decisions have had an impact on students’ learning experiences. While the current dynamic environment has motivated many HE institutions to rethink their assessments, we recommend a systematic use of data for decision-making. In this paper, we describe the DDDM model adopted at an educational institute, as a case study, in making an informed and justified decision for shifting towards a nonexam mode of assessments for one of its HE programs.
The rest of the paper is organized as follows. Section 2 presents an overview of DDDM and related studies in the literature. In Section 3, we provide a rationale for transforming traditional exam-based assessments to nonexam assessments using the DDDM approach as a case study. The impact of the transformation is studied, and the outcomes are presented in Section 4, together with a discussion of the data analysis performed, the results obtained, and future recommendations. Finally, Section 5 provides concluding remarks.

2. Data-Driven Decision-Making (DDDM) Approach in Education

Data-driven decision-making (DDDM) is a process for deciding on a course of action using data or facts rather than merely relying on observation, intuition, or any other form of subjectivity that could be biased. The adoption of DDDM is growing in the education sector for making informed improvements when educators use student data to influence curriculum decisions, strategies, and policies [10]. With digital technologies becoming more accessible, affordable, and available anywhere and anytime, it has become easier to use data to inform decision-making in teaching and assessment practices. With the DDDM approach, lecturers can focus both on formative assessments that evaluate students’ ongoing daily learning and on the overall achievement measured by summative assessments, connecting what students already know with what we want them to learn. Using DDDM, a shift in traditionally accepted practices could be made to improve teaching and student learning outcomes.
For more than a decade, DDDM principles have been adopted to examine and evaluate classroom teaching practices and academic interventions to improve student learning experiences and achievements. However, using data to inform teaching practices is a multifaceted endeavor. The term data in DDDM refers to information that can be broadly classified as either quantitative or qualitative. This could include information captured intentionally and analyzed at either the individual or group level, such as informal observations of students’ classroom behavior, inventories of student learning activities, formal norm-referenced assessments, self-appraisals of students/lecturers, data generated automatically by an online learning management system, etc. From the literature, we observe that data can be used in multiple ways based on the range of practitioner perspectives, which results in the complexity of DDDM across different educational situations. While data could provide more insights into classroom instruction, there are multiple perspectives on the philosophical view of reality, knowledge, and learning [11]. Further, there are philosophical differences in the purpose and methods of data collection for the intended stakeholders as well as in their levels of involvement. Some studies have been conducted on a large scale, collecting data from a school, district, or even state to inform classroom practice, assessment paradigms, and student performance. Multiple perspectives surrounding the concept of assessment and its purpose exist. Data could be captured not only by academic staff but also by administrators on various attributes related to lecturers, classroom activities, assessments, etc. Any data gathered, whether in numerical format or not, have the potential to influence classroom delivery and, ultimately, student learning and achievement. With a new era compelling educators to collect, organize, and analyze data, using the information effectively for instructional and curriculum improvement requires a framework for guidance.
Some studies offer models to assist in conceptualizing the data gathered so that they can be used more effectively [12,13]. Such studies have used three kinds of data, namely, outcome data, demographic data, and process data, and have suggested that a combined use of all three would result in instruction that is systematic, targeted, and purposeful, with high levels of student learning. Student learning assessments such as classroom tests, exams, and even surveys could be analyzed to gain insights into individual and group performance. These, combined with demographic data such as students’ age, gender, race/ethnicity, and social class/socioeconomic status, would provide a broader perspective for effective decision-making. In addition, process data relate to the curriculum organization, teaching strategies, and other classroom management aspects of the educational program. Process data, when analyzed in combination with outcome and demographic data, can result in changes in the behavior of lecturers and administrators. Along similar lines, the Data Analysis Framework for Instructional Decision-Making proposed by Mokhtari et al. [12] consists of three categories: professional development data, classroom assessment data, and reading performance data. Both these models reflect the view that data from the three categories, when combined, lead to effective DDDM. However, their data categories do not differentiate between summative and formative assessments, though the conceptual perspectives on different types of assessment vary in purpose and role. Data from both formative and summative assessments are used to determine the proficiency of a student’s learning achievement or performance. These are judged in relation to certain pre-established criteria or in relative comparison to a peer group that serves as the basis for assigning student grades. While data derived from formative assessments could be useful for reporting on students’ learning progress with a focus on their immediate learning needs, data from summative assessments may inform the overall achievement of the learning target or learning objective for the current/future cohort of students, and the decisions have high-stakes consequences.
Educators are faced with dilemmas in choosing between summative and formative assessments, as each of these two categories has a distinct purpose and role. The concept of a balanced assessment between ongoing informal classroom assessments and more formal standardized measures is gaining popularity in the literature [14,15]. However, there is an increasing trend towards promoting formative assessment, and some even claim that formative assessment that encourages student involvement and active learning is more valuable than summative assessment. With the fourth industrial revolution underway, the reskilling requirements of the workforce have led to the Education 4.0 revolution with its lifelong learning goals. To foster Education 4.0, educators are required to rethink assessments and to keep up with current pedagogical, cultural, and technological developments that impact teaching and learning.
With the recent advancements in digital technologies, a plethora of new possibilities have emerged for facilitating assessments to engage students with lifelong experiences. However, the use of digital technologies for assessments is often more about replacing traditional methods and existing practices than being transformative. Further, the ethical issues of digital technologies in assessment and other factors affecting successful educational change require closer scrutiny, and a DDDM approach could assist in rethinking assessments. However, a recent study [16] explored the misconceptions around data-based decision-making in education among researchers and practitioners. Based on a survey of the literature, it was identified that the effectiveness of DDDM depends on having reliable data, measurable goals, and the ability to translate data analysis into improved educational interventions, with a focus on professional learning and development. The study aimed to gain further insight into teachers’ decision-making. Such related studies could provide a stimulus for implementing justified changes in educational practice as well as a roadmap for a future research agenda on the effective use of data for achieving educational quality and innovation [17,18].
In this study, a justified rethinking of assessments is recommended using a DDDM approach. Improvement of students’ progress and learning outcomes can be demonstrated through assessments. Data collection and analysis of student performance over time and other educational data could facilitate the justification of informed and valid decisions. This paper provides a description of the use of a DDDM approach for transforming traditional assessments in an HE setting as a case study.

3. Rationale for Transforming Traditional Assessments Using DDDM: A Case Study

An assessment that is administered as a final, written examination (called an exam) is generally classified as a summative assessment, as it is used to determine whether students have learned what they were expected to learn. By contrast, formative assessments are used to monitor student learning and provide ongoing feedback that can be used by lecturers to improve their teaching and by students to improve their learning. In HE, summative assessments are usually highly valued in terms of marks or assessment weighting, whilst formative assessments are of lower value.
As a general guideline, an assessment is designed to fall exclusively under one of two categories: summative or formative. In this study, we consider an educational institute that offers a Bachelor of Information Technology (BIT) program in Australia as a case study with a slightly different assessment practice. Apart from exams conducted at the end of a semester and class tests during the semester, the majority of the other assessments for the BIT program are designed to have a mix of both summative and ongoing evaluation aspects rather than being exclusive to one assessment category alone. The BIT program at the institute offers 31 subjects. Prior to semester 2, 2020, 4 of those subjects did not have exams as part of their assessment, whilst 27 subjects had exams. For the subjects with a final exam, the exam accounted for 40% to 50% of the assessment weighting, depending on the subject’s requirements for determining a summative evaluation of each student’s learning. Indeed, no subject had an exam that accounted for more than 50% of the final mark. However, in semester 1, 2020, the COVID-19 lockdown restrictions made a rethinking of assessments mandatory, as the restricted conditions of online, remote subject offerings impacted teaching delivery, student learning, and assessments. As a result, a DDDM approach was adopted to identify the major assessments of those subjects that required changes in delivery and assessment methods. Transformations from exam-based to nonexam assessments were proposed for 13 subjects that were identified based on both qualitative and quantitative feedback on each subject’s delivery. The change for these subjects from exam-based to nonexam assessments was accepted by the appropriate governance committees of the institute and the accreditation bodies. Overall, there were 4 subjects that were always offered with no exams, 13 subjects that were changed from exam-based to nonexam assessments, and 14 subjects that maintained a summative exam at the end of the semester. This paper examines the impact of these changes and how a DDDM approach could help in rethinking exam-based assessments and remodeling them into other forms of assessment as a continuous improvement strategy for educational quality.
While the subjects with exams typically had 40% to 50% of marks allocated to a final semester exam, the majority of the remaining marks were generally based on two or three high-value reports or projects. In a thirteen-week semester, typically, one report would be due in week 8 and the other in week 12. It is common practice to design the reports/projects due in weeks 8 and 12 to be related to each other. For example, in a subject called Systems Analysis and Design, the week 8 project submission is an analysis document on a relevant case scenario, and the week 12 project submission is a design document on the same case scenario. Further, for exam-based subjects, revision classes are held in week 13, followed by a week’s study break, after which the exams are conducted. Low-value assessments are rare, as the institute’s policy is to minimize low-value assessments in higher education.
In recent years, literature studies have concurred with academic teaching experiences worldwide that HE students are reluctant to undertake assessments that do not contribute substantial value to their overall course grades. In particular, the student circumstances of the educational institute in this case study are quite distinctive, since the majority of the students (over 90%) are international and have to work to pay their fees and support themselves. Further, the domestic students at the institute do not receive government funding for their course fees and predominantly come from the lower end of the socioeconomic spectrum. Hence, they too have to work to support themselves and pay their fees. Given these circumstances, students are reluctant to undertake assessments unless they carry a substantial value towards the overall assessment component of the subject grade.
For each subject offered to the HE students at the institute, it is desirable to follow contemporary learning theory and practice by assessing the cognitive, behavioral, and affective or emotional dimensions of learning outcomes [19]. These three dimensions are considered the three constructs of student engagement in learning, which give an assessment of: (i) students’ academic and multitasking capabilities (behavioral) inside or outside classrooms, (ii) students’ attitudes, interests, and values (affective or emotional) in their learning experience, and (iii) students’ self-directed learning skills or motivational goals (cognitive) achieved [20,21]. Traditionally, researchers have employed different variables to assess student engagement in the learning process for determining academic achievement [22,23], including social, communication, and participative aspects. However, the most commonly discussed variables are cognitive, affective, and behavioral engagement [24,25]. In medical education, it is more desirable to assess the cognitive, psychomotor, and communication skills of graduating students while also assessing their professionalism attributes for achieving effectiveness in their field of practice.
Due to the abovementioned multifaceted perspectives involved in the assessment of student learning, it is important to design assessments well. In particular, an assessment drives student learning and forms a fundamental aspect of instruction. Hence, at the Ottawa Conference in 2010, a consensus framework for good assessment of student achievement was developed [26]. Though desirable, a single assessment alone may not be able to assess all the variables of student engagement. Hence, a combination of assessments that fosters the active involvement of students needs to be designed for the benefit of all stakeholders: students, lecturers, employers, etc. [27]. The purpose of an assessment influences the importance of each variable of student engagement to varying degrees, and the associated weightings may differ based on the lecturer’s perspective of the impact on student learning. Therefore, the key challenge in designing an assessment is to use appropriate assessment tasks that will lead towards evaluating each student’s learning and performance [25]. Further, students’ perceptions of the purposes of assessment, cues about what and how to learn, and the assessment format, as well as its relationship with what is being assessed, are some of the main factors that impact learning. It is equally important to also consider the lecturers’ perceptions as well as direct measures of students’ learning, such as their exam scores and assessment rubrics, to ensure an accurate evaluation of their learning outcomes.
With the recent increasing trend of distance-learning education that students can undertake via massive open online courses (MOOCs), several research reviews focusing on MOOC assessments have revealed the need for further investigation into the assessment practices of instructional designers and course instructors, towards appropriate course designs that relate assessments to learning outcomes [28,29,30]. Such studies have also examined factors of Biggs’ 3P model that influence student learning outcomes such as engagement and achievement. There has been an increasing emphasis on individual student characteristics to be considered in assessment design, along with rubrics and other assessment instruments, including peer and self-assessment. In addition, research findings in teaching and learning suggest that the complexity of assessment methods is limited, with a lack of diversity relating to learning outcomes [31]. Hence, several challenges and affordances exist in assessing student learning in online environments.
It is well established that in HE there is a need for assessments that are both formative and summative. In the BIT program considered in this study, the reports and projects used as assessments serve this purpose both in subjects that have a summative exam and in subjects that have only nonexam assessments of a formative type. In both exam-based and non-exam-based subjects, typically in week 3 or 4, students are given the specifications for a report/project due in week 8. This way, students are given sufficient time to consult with their lecturer, who gives them guidance and at the same time monitors their progress. Each student’s assessment submission is marked, with individual feedback given by the lecturers. Assessments due in week 12 follow a similar process. These assessments have been successful in both guiding student learning and evaluating the degree to which the student is able to meet the assessment’s learning objectives. However, their success does not answer the question of why there is a need to replace exams with such formative nonexam assessments. With invigilated exams considered more secure, the tendency is to continue with an assessment framework consisting of both reports/projects during the semester and an exam at the end of the semester. Compared with traditional paper-based exams, online exams are perceived to offer more opportunities for cheating, such as students using unpermitted resources or collaborating during the exam [32,33,34,35]. Some studies report that cheating, such as the use of unauthorized materials, identity falsification, and plagiarism, has always occurred regardless of whether the assessment is online or not [36,37]. Further, recent remote online delivery and MOOCs during the COVID-19 pandemic have resulted in novel assessment experiences for many universities [4,38]. Hence, the educational institute under study was also motivated to rethink the assessment methods used in several subjects of the BIT program. We adopted a DDDM approach for making the necessary transformations from exam-based to nonexam assessments.
The implementation of the DDDM approach for assessment transformation in the BIT program required subject lecturers and course administrators to possess basic data-literacy and common skillsets such as: (i) assessment literacy, (ii) reliable student data gathering, (iii) technology literacy to handle data analytics and insights, and (iv) effective data use for assessing student learning. Relevant data included assessment outcomes, students’ background information, and qualitative feedback from classroom observations. We adapted the four recommendations from [39] for preparing the subject lecturers and course administrators on data literacy as follows: (1) provide data-skill-focused training, (2) establish collaboration between subject lecturers and moderators/subject experts, (3) model and use quantitative and qualitative data from various sources, and (4) explore the role of technology and big data on data literacy. Several workshops, meetings and focus-group discussions were undertaken to improve data literacy for transforming assessments from exam-based to nonexam assessments. The next section provides the data analysis and results of this study followed by a discussion on the student learning achieved due to assessment transformations and other influencing factors that had an impact on the results.

4. Data Analysis, Results, and Discussion

Rethinking assessments presents many challenges, particularly with online delivery and blended learning environments. One of the recent challenges has been to consider various ways of safeguarding academic integrity while redesigning assessments [38,40]. In the past, a summative assessment such as a written exam had a better chance of preserving academic integrity. However, such invigilated assessments in MOOCs require much rethinking, as detecting cheating would be significantly challenging for invigilators and lecturers given the technologies currently available to students [41]. Hence, with the recent prevalence of technology-embraced online learning environments, there has been much debate about the different assessment methods being used as learning assessments. We observed that these education paradigms offered both opportunities and constraints on student learning, affecting the overall grades achieved by students of the BIT program under study. We adopted a DDDM approach to gather data and analyze all the subjects’ assessments from both the teaching and the student learning perspectives to establish a more scientific basis for assessment transformations over a period of five years [42,43]. As reported in the literature [44], we considered four categories of influencing factors as prerequisites for a successful implementation of our DDDM approach in the assessment transformations. These four factors, namely, (1) the assessment instruments and processes, (2) the role of the teacher, (3) the role of the student, and (4) the context of the learning environment, are important for assessment design as they could influence the data that are used for enhancing student learning.
The initial phase of our study involved reviewing the types of assessment methods adopted for each subject and the major assessment components that were used to assess student learning in achieving almost all of the subject learning outcomes. Such assessments of student learning contribute to the overall course grade to a great extent. We identified several challenges that arose due to the impact of a strict lockdown during the COVID-19 pandemic, which imposed physical distance between the lecturers and the students. There were considerable changes in subject delivery, with the necessity of using technology for all communications in addition to teaching and assessing the students. Many issues with workload and time management adaptations surfaced with the need to provide student learning and assessment feedback through online platforms. Some of the written exam-type summative assessments were replaced by other forms such as team-based assignments, oral exams, and take-home exams, without compromising the learning outcomes, based on similar practices reported in the literature [45,46,47,48,49]. In this paper, using a DDDM approach, we focus on the impact of two major types of assessment methods, namely exam-based and nonexam assessments, adopted in the BIT subjects of an HE program undertaken as a case study. We considered subjects that were taught before the COVID-19 pandemic and transformed to nonexam subjects, and the impact on student learning due to the forced adoption of online delivery during the pandemic. Further, we considered the impact on student learning of the blended learning environments adopted after the COVID-19 pandemic. Using a DDDM approach, we studied the effects on student learning and performance before and after the transformation. We performed an analysis of the overall course grade, which reflects the student learning achieved in each subject, under three scenarios: (i) before the COVID-19 pandemic with physical classroom delivery, (ii) during the COVID-19 pandemic with only online delivery, and (iii) after the COVID-19 pandemic with blended delivery. This section presents the data analysis and results comparing the overall course grades achieved by the students over a period of five years.
We considered the set of 31 subjects offered in the BIT program under three categories: (i) subjects whose exams were removed during the COVID-19 pandemic and replaced by major assignments/projects (13 subjects), (ii) subjects that never had exams (4 subjects), and (iii) subjects that maintained exams (14 subjects). A comparison of average fail rates for all subjects in each of these three categories was performed over ten semesters, and the results are shown in Figure 1 for the five-year period from 2018 to 2022. It should be noted that, apart from the 13 subjects transformed to nonexam subjects, there were only 4 subjects that never had exams and 14 subjects that always had exams throughout those 5 years. Firstly, we observe that during the strict lockdown period of the COVID-19 pandemic (2020–2021), all subjects had lower fail rates. There are a number of reasons for this positive student learning achievement. During the COVID-19 pandemic, many students lost their jobs and hence could focus only on their studies. Further, with online delivery, a majority of the students were able to use their time effectively, with lecturer feedback available more quickly through online platforms. This showed good adaptability by lecturers and students, who adopted a variety of online learning technology platforms and collaborative learning software tools. Secondly, we observe that the “Never had exams” category showed the least change in fail rate before, during, and after the COVID-19 pandemic. Among the four subjects that never had exams, two were capstone subjects that traditionally had very low fail rates and high rates of distinctions. Their impact on the results for the “Never had exams” category was quite significant, and hence this category consistently showed low fail rates. Thirdly, we observe that the trend for the “Removed exams” category shows that the fail rate dropped considerably during the COVID-19 pandemic in 2020 and 2021, to as low as 11%, but subsequently shot up to 33% in Sem 1, 2022. One of the key reasons for this trend is that students had taken up more part-time work hours to compensate for the income lost during the COVID-19 lockdown. Since online delivery continued in Sem 1, 2022, the majority of the students would join online classes from their workplace, with less attention and focus given to their studies. It was a transition from the COVID-19 pandemic to the postlockdown period. However, in Sem 2, 2022, when blended delivery was introduced and more students were attending classes on campus, the fail rate started to decrease. Fourthly, we observe that the “Maintained exams” category was negatively impacted after the COVID-19 pandemic, with the fail rate exceeding 30% in both semesters of 2022. Further, these observations are confirmed by the mean marks obtained over the 10 semesters, as shown in Figure 2. The “Never had exams” category shows better results, with higher mean marks achieved by students than in the other two categories of subjects. However, the mean marks peaked in Sem 2, 2020, dropped to their lowest values in Sem 1, 2022, and rebounded in Sem 2, 2022. The trend was similar in the other two categories. This motivated us to explore the data further and analyze them more deeply.
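The comparison above can be reproduced with a few lines of analysis code. The following Python sketch is a minimal illustration using pandas; it assumes a hypothetical results file named bit_results.csv with one row per student per subject offering and illustrative columns semester (e.g., "2018-S1"), category (one of the three subject categories), and mark (out of 100), together with an assumed pass mark of 50. It is not the institute's actual analysis pipeline, but it shows how the per-semester fail rates (Figure 1) and mean marks (Figure 2) could be derived.

```python
import pandas as pd

# Hypothetical layout: one row per student per subject offering.
# Columns assumed for illustration: 'semester', 'category', 'mark'.
results = pd.read_csv("bit_results.csv")

# Assumed pass mark of 50; the institute's actual threshold may differ.
results["failed"] = results["mark"] < 50

# Average fail rate (%) per category per semester (cf. Figure 1).
fail_rate = (
    results.groupby(["category", "semester"])["failed"]
    .mean()
    .mul(100)
    .unstack("semester")
)

# Mean mark per category per semester (cf. Figure 2).
mean_marks = (
    results.groupby(["category", "semester"])["mark"]
    .mean()
    .unstack("semester")
)

print(fail_rate.round(1))
print(mean_marks.round(1))
```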
The transformation of subjects from exam-based to nonexam assessments was a major change for lecturers and students. Hence, we analyzed the impact by studying student performance pre- and postchange. Figure 3 provides an overview of fail rates pre- and postchange, as well as summarizing 2021 and 2022 separately. We separated the data for the period from Sem 2, 2020, when the change occurred, to the end of Sem 1, 2021 from the 2022 data to investigate any difference in trend and the underlying factors. One of the reasons for the higher fail rate in the subjects that maintained exams in 2022 was that, as observed by the lecturers, the student cohort increasingly lacked study skills. With more opportunities for students to get help from external resources, and stricter academic integrity measures put in place, exams were becoming more challenging for students. Lecturers also adopted more effective practices in online assessments based on their online delivery experiences during the COVID-19 pandemic. Hence, in general, fail rates for all categories were higher in 2022. In particular, the failure rate for both the removed-exams and maintained-exams categories increased substantially in 2022. Both these categories were clearly affected by factors that overshadowed any effect of the move to nonexam assessments. The fail rate for the “Never had exams” category remained more or less the same throughout, as it was dominated by capstone students who were able to cope with final-year projects, which students had always prioritized. The fail rate for both the “Removed exams” and “Maintained exams” categories increased significantly in the period after the change. Further, in 2022, the failure rate for both these categories increased dramatically, particularly for the subjects that maintained exams.
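Along the same lines, the prechange/postchange summary behind Figure 3 could be computed as sketched below, under the same hypothetical data layout as the previous example, by splitting semesters at the point where the change took effect (Sem 2, 2020, encoded here as "2020-S2") and additionally summarizing 2021 and 2022 separately. The semester encoding and grouping are illustrative assumptions.

```python
import pandas as pd

# Same hypothetical table as in the previous sketch.
results = pd.read_csv("bit_results.csv")
results["failed"] = results["mark"] < 50              # assumed pass mark
results["year"] = results["semester"].str[:4].astype(int)

# The change to nonexam assessments took effect in Sem 2, 2020 ("2020-S2");
# zero-padded "YYYY-Sx" codes sort chronologically as strings.
results["period"] = (results["semester"] >= "2020-S2").map(
    {True: "Postchange", False: "Prechange"}
)

# Fail rate (%) per category, prechange vs. postchange (cf. Figure 3).
pre_post = (
    results.groupby(["category", "period"])["failed"]
    .mean()
    .mul(100)
    .unstack("period")
    .round(1)
)

# Fail rate (%) per category for 2021 and 2022 summarized separately.
by_year = (
    results[results["year"].isin([2021, 2022])]
    .groupby(["category", "year"])["failed"]
    .mean()
    .mul(100)
    .unstack("year")
    .round(1)
)

print(pre_post)
print(by_year)
```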
Overall, we identified three main factors that impacted student learning in 2022 as follows:
(i)
The removal of a 20 h cap on work for international students onshore;
(ii)
A higher proportion of new student enrolments since the end of the international border closure caused by the COVID-19 pandemic. Many of these new students were still offshore due to slow visa processing. They had difficulty coping with virtual classes that did not suit their time zone and could not adapt to the change in academic culture and expectations, as they were studying from remote locations;
(iii)
A high proportion of students from certain overseas countries affected by the COVID-19 pandemic and other natural disasters, such as floods, which led to crisis situations and mental stress for students onshore and offshore.
The teaching teams held several meetings and focus group discussions to analyze the data on student performance before, during, and after the COVID-19 pandemic. Various continuous improvements in teaching and assessment practices were adopted, and the move to nonexam assessments was formulated and reviewed using data analysis from the prechange and postchange investigations. In particular, the following steps of the DDDM approach were successfully adapted [42]:
  • Share a data-driven practice with all stakeholders such as lecturers, students, administrators, management, industry partners, and external academics;
  • Adopt a standard data-driven process for making decisions from relevant data sources;
  • Use a simple data analysis method that captures many perspectives and deeper insights;
  • Train lecturers on the processes for data-driven teaching;
  • Use the data-driven process to identify learning goals for each assessment of a subject every semester based on the historical trend in data;
  • Collaboratively discuss student performance data with stakeholders;
  • Make data visible, tangible, and easily accessible;
  • Identify students’ individual strengths and weaknesses and plan for suitable interventions that can be implemented;
  • Ensure educators commit to using data for informed decision-making in teaching;
  • Make adjustments in the data-driven process with lessons learned each semester towards improving teaching and student learning.
Once a DDDM approach was embraced, the performance measures could be selected appropriately. In this case study, the impact of the assessment transformation from exam-based to nonexam assessments over a period of educational changes due to the COVID-19 pandemic was analyzed using measures such as the overall fail rate and mean marks, which serve as indicators of overall student learning in the BIT program. This pilot study serves as a starting point for further investigations into many other perspectives through different slicing and dicing of the data and other data-mining methods. It has led to planned further investigations, such as considering the impact of assessment changes for individual subjects and for groups of related subjects under a particular major specialization in the BIT program. Subsequent to the study, the DDDM approach was useful for the teaching team and management of the institute in identifying five more subjects for transformation from exam-based to nonexam assessments. The main outcome of a DDDM approach in education is to continually strive to enhance teaching practice in order to improve student learning outcomes. Other student learning measures contributing to long-term impact, such as the progression of graduates into relevant jobs, could be studied to make informed decisions in redesigning the curriculum as part of future ongoing research.

5. Conclusions

It has become a worldwide practice for educators to use student data effectively, as such data form a key impetus for improving student learning and academic performance. However, student academic data collected per se offer limited insight into improving student learning and classroom practice. A DDDM approach could provide deeper insights to various stakeholders, in particular for making informed decisions in rethinking teaching and assessment practices.
This paper studied the impact of formative and summative assessments on student performance to support the decision to shift from exam-based to nonexam assessments in 13 subjects out of a total of 31 subjects offered in an HE program as a case study. A DDDM approach was adopted in this pilot study to analyze student performance data collected at the end of each semester over a span of five years. A data analysis of the impact of the change in assessments was performed for three time periods: before, during, and after the COVID-19 pandemic. Different measures, such as the overall fail rate and overall mean marks achieved by students, as well as lecturer classroom experiences, were used to validate our findings. The ten steps of the DDDM process cycle adopted in this case study helped the educators identify the underlying factors that influenced student learning during the volatile educational environment faced by the institute over the past five years. In addition, the DDDM approach created evidence for practice and further assisted in identifying five more subjects for assessment transformation in the HE program.
Future research directions were also suggested, considering other sources of data and measures in the pursuit of improving the quality of teaching and student learning in the HE program undertaken for this study. Online learning experiences during the COVID-19 pandemic have transformed educational paradigms towards more blended learning offerings of programs. We believe that the DDDM process applied in this pilot study would serve educators well in gaining deeper insights into student data for continuous improvements in teaching and student assessments. It would also pave the way for making more informed decisions in the future towards redesigning the BIT program curriculum and the specialization subjects offered.

Author Contributions

Conceptualization, S.K. and S.V.; methodology, S.V.; formal analysis, S.K.; validation, S.K. and S.V.; investigation, S.K. and S.V.; resources, S.K.; data curation, S.K.; writing—original draft preparation, S.V.; writing—review and editing, S.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy restrictions.

Acknowledgments

The authors wish to thank the institute for the support in undertaking this research study.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Chappuis, J.; Stiggins, R.; Chappuis, S.; Arter, J. Classroom Assessment for Student Learning: Doing It Right-Using It Well; Assessment Training Institute Inc.: Portland, OR, USA, 2004. [Google Scholar]
  2. Theobold, A.S. Oral Exams: A More Meaningful Assessment of Students’ Understanding. J. Stat. Data Sci. Educ. 2021, 29, 156–159. [Google Scholar] [CrossRef]
  3. Cramp, J.; Medlin, J.F.; Lake, P.; Sharp, C. Lessons learned from implementing remotely invigilated online exams. J. Univ. Teach. Learn. Pract. 2019, 16, 10. [Google Scholar] [CrossRef]
  4. Holden, O.L.; Norris, M.E.; Kuhlmeier, V. Academic Integrity in Online Assessment: A Research Review. Front. Educ. 2021, 6, 639814. [Google Scholar] [CrossRef]
  5. Shariffuddin, S.A.; Ibrahim, I.S.A.; Shaaidi, W.R.W.; Syukor, F.D.M.; Hussain, J. Academic Dishonesty in Online Assessment from Tertiary Students’ Perspective. Int. J. Adv. Res. Educ. Soc. 2022, 4, 75–84. [Google Scholar]
  6. Vanlommel, K.; Gasse, R.V.; Vanhoof, J.; Petegem, P.V. Teachers’ decision-making: Data based or intuition driven? Int. J. Educ. Res. 2017, 83, 75–83. [Google Scholar] [CrossRef]
  7. Worthen, B.R.; Borg, W.R.; White, K.R. Measurement and Evaluation in the Schools; Longman: White Plains, NY, USA, 1993. [Google Scholar]
  8. Bloom, B.; Englehart, M.; Furst, E.; Hill, W.; Krathwohl, D. Taxonomy of Educational Objectives: The Classification of Educational Goals. Handbook I: Cognitive Domain; Longmans, Green: New York, NY, USA; Toronto, OH, USA, 1956. [Google Scholar]
  9. Krathwohl, D.R.; Bloom, B.S.; Masia, B.B. Taxonomy of Educational Objectives: The Classification of Educational Goals, Handbook II: Affective Domain; David Mckay Company Inc.: New York, NY, USA, 1964. [Google Scholar]
  10. Atkinson, L. Teachers’ Experiences with the Data-Driven Decision Making Process in Increasing Students’ Reading Achievement in a Title I Elementary Public School. Ph.D. Thesis, Concordia University Chicago, River Forest, IL, USA, 2015. [Google Scholar]
  11. Serafini, F. Three paradigms of assessment: Measurement, procedure, and inquiry. Read. Teach. 2000, 54, 384–393. [Google Scholar]
  12. Mokhtari, K.; Rosemary, C.A.; Edwards, P.A. Making Instructional Decisions Based on Data: What, How, and Why. Read. Teach. 2007, 61, 354–359. [Google Scholar] [CrossRef]
  13. Brecklin, T.A. Data-Driven Decision-Making: A Case Study of How a School District Uses Data to Inform Reading Instruction. 2010. Available online: https://epublications.marquette.edu/cgi/viewcontent.cgi?article=1039&context=dissertations_mu (accessed on 11 May 2022).
  14. Farr, B.P.; Trumbull, E. Assessment Alternatives for Diverse Classrooms; Christopher-Gordon Publishers, Inc.: Norwood, MA, USA, 1997. [Google Scholar]
  15. Popham, W.J. Uses and misuses of standardized tests. NASSP Bull. 2001, 85, 24–31. [Google Scholar] [CrossRef]
  16. Mandinach, E.B.; Schildkamp, K. Misconceptions about data-based decision making in education: An exploration of the literature. Stud. Educ. Eval. 2021, 69, 100842. [Google Scholar] [CrossRef]
  17. Kurilovas, E. On data-driven decision-making for quality education. Comput. Hum. Behav. 2020, 107, 105774. [Google Scholar] [CrossRef]
  18. Botvin, M.; Hershkovitz, A.; Forkosh-Baruch, A. Data-driven decision-making in emergency remote teaching. Educ. Inf. Technol. 2023, 28, 489–506. [Google Scholar] [CrossRef] [PubMed]
  19. Illeris, K. The Three Dimensions of Learning: Contemporary Learning Theory in the Tension Field between the Cognitive, the Emotional and the Social; Roskilde University Press: Copenhagen, Denmark, 2002. [Google Scholar]
  20. Pérez, M.; Ayerdi, V.; Arroyo, Z. Students Engagement and Learning through the Development of Didactic Models for Mechanical Engineering. Univers. J. Educ. Res. 2018, 6, 2300–2309. [Google Scholar]
  21. Philp, J.; Duchesne, S. Exploring Engagement in Tasks in the Language Classroom. Annu. Rev. Appl. Linguist. 2016, 36, 50–72. [Google Scholar] [CrossRef] [Green Version]
  22. Handelsman, M.M.; Briggs, W.L.; Sullivan, N.; Towler, A. A Measure of College Student Course Engagement. J. Educ. Res. 2005, 98, 184–191. [Google Scholar] [CrossRef]
  23. Dixson, M. Measuring Student Engagement in the Online Course: The Online Student Engagement Scale (OSE). Online Learn. 2015, 19, n4. [Google Scholar] [CrossRef] [Green Version]
  24. Boud, D.; Falchikov, N. Aligning assessment with long-term learning. Assess. Eval. High. Educ. 2006, 31, 399–413. [Google Scholar] [CrossRef] [Green Version]
  25. Biggs, J.; Tang, C. Using Constructive Alignment in Outcomes-Based Teaching and Learning. In Teaching for Quality Learning at University, 3rd ed.; Open University Press: Maidenhead, UK, 2007; pp. 50–63. [Google Scholar]
  26. Norcini, J.; Anderson, B.; Bollela, V.; Burch, V.; Costa, M.J.; Duvivier, R.; Galbraith, R.; Hays, R.; Kent, A.; Perrott, V.; et al. Criteria for good assessment: Consensus statement and recommendations from the Ottawa 2010 Conference. Med. Teach. 2011, 33, 206–214. [Google Scholar] [CrossRef]
  27. Norcini, J.; Anderson, M.B.; Bollela, V.; Burch, V.; Costa, M.J.; Duvivier, R.; Hays, R.; Palacios Mackay, M.F.; Roberts, T.; Swanson, D. Consensus framework for good assessment. Med. Teach. 2018, 40, 1102–1109. [Google Scholar] [CrossRef]
  28. Wei, X.; Saab, N.; Admiraal, W.F. Assessment of cognitive, behavioral, and affective learning outcomes in massive open online courses: A systematic literature review. Comput. Educ. 2021, 163, 104097. [Google Scholar] [CrossRef]
  29. Pilli, O.; Admiraal, W.F. Students’ learning outcomes in massive open online courses (MOOCs): Some suggestions for course design. J. High. Educ. 2017, 7, 46–71. [Google Scholar]
  30. Zhu, M.; Sari, A.; Lee, M.M. A systematic review of research methods and topics of the empirical MOOC literature (2014–2016). Internet High. Educ. 2018, 37, 31–39. [Google Scholar] [CrossRef]
  31. Deng, R.; Benckendorff, P.; Gannaway, D. Linking learner factors, teaching context, and engagement patterns with MOOC learning outcomes. J. Comput. Assist. Learn. 2020, 36, 688–708. [Google Scholar] [CrossRef]
  32. Kennedy, K.; Nowak, S.; Raghuraman, R.; Thomas, J.; Davis, S. Academic dishonesty and distance learning: Student and faculty views. Coll. Stud. J. 2000, 34, 309–314. [Google Scholar]
  33. Christie, B. Designing Online Courses to Discourage Dishonesty: Incorporate a Multilayered Approach to Promote Honest Student Learning. Educ. Q. 2003, 11, 54–58. [Google Scholar]
  34. Rogers, C. Faculty perceptions about e-cheating during online testing. J. Comput. Sci. Coll. 2006, 22, 206–212. [Google Scholar]
  35. Cerimagic, S.; Rabiul Hasan, M. Online exam vigilantes at Australian universities: Student academic fraudulence and the role of universities to counteract. Univers. J. Educ. Res. 2019, 7, 929–936. [Google Scholar] [CrossRef]
  36. Barnes, C.; Paris, B. An Analysis of Academic Integrity Techniques Used in Online Courses at A Southern University. 2013. Available online: https://www.researchgate.net/publication/264000798_an_analysis_of_academic_integrity_techniques_used_in_online_courses_at_a_southern_university (accessed on 12 September 2022).
  37. Hylton, K.; Levy, Y.; Dringus, L.P. Utilizing webcam-based proctoring to deter misconduct in online exams. Comput. Educ. 2016, 92–93, 53–63. [Google Scholar] [CrossRef]
  38. Bilen, E.; Matros, A. Online cheating amid COVID-19. J. Econ. Behav. Organ. 2021, 182, 196–211. [Google Scholar] [CrossRef]
  39. Henderson, J.; Corry, M. Data literacy training and use for educational professionals. J. Res. Innov. Teach. Learn. 2021, 14, 232–244. [Google Scholar] [CrossRef]
  40. Walsh, L.L.; Lichti, D.A.; Zambrano-Varghese, C.M.; Borgaonkar, A.D.; Sodhi, J.S.; Moon, S.; Wester, E.R.; Callis-Duehl, K.L. Why and how science students in the United States think their peers cheat more frequently online: Perspectives during the COVID-19 pandemic. Int. J. Educ. Integr. 2021, 17, 23. [Google Scholar] [CrossRef]
  41. Arango-Caro, S.; Walsh, L.L.; Wester, E.R.; Callis-Duehl, K. The Role of Educational Technology on Mitigating the Impact of the COVID-19 Pandemic on Teaching and Learning. In Technologies in Biomedical and Life Sciences Education. Methods in Physiology; Springer: Cham, Switzerland, 2022; Volume 45, pp. 1–490. [Google Scholar]
  42. James, R. A Multi-Site Case Study: Acculturating Middle Schools to Use Data-Driven Instruction for Improved Student Achievement. Ph.D. Dissertation, Virginia Tech, Blacksburg, VA, USA, 2010. [Google Scholar]
  43. Simon, L.E.; Kloepper, M.L.; Genova, L.E.; Kloepper, K.D. Promoting Student Learning and Engagement: Data-Supported Strategies from an Asynchronous Course for Nonmajors. In Advances in Online Chemistry Education; American Chemical Society: Washington, DC, USA, 2021; pp. 1–19. [Google Scholar]
  44. Hoogland, I.; Schildkamp, K.; Van der Kleij, F.; Heitink, M.; Kippers, W.; Veldkamp, B.; Dijkstra, A.M. Prerequisites for data-based decision making in the classroom: Research evidence and practical illustrations. Teach. Teach. Educ. 2016, 60, 377–386. [Google Scholar] [CrossRef]
  45. Belmonte, I.; Borges, A.V.; Garcia, I.T.S. Adaptation of Physical Chemistry Course in COVID-19 Period: Reflections on Peer Instruction and Team-Based Learning. J. Chem. Educ. 2022, 99, 2252–2258. [Google Scholar] [CrossRef]
  46. Jacobs, A.D. Utilizing Take-Home Examinations in Upper-Level Analytical Lecture Courses in the Wake of the COVID-19 Pandemic. J. Chem. Educ. 2021, 98, 689–693. [Google Scholar] [CrossRef]
  47. Lubarda, M.; Delson, N.; Schurgers, C.; Ghazinejad, M.; Baghdadchi, S.; Phan, A.; Minnes, M.; Relaford-Doyle, J.; Klement, L.; Sandoval, C.; et al. Oral exams for large-enrollment engineering courses to promote academic integrity and student engagement during remote instruction. In Proceedings of the 2021 IEEE Frontiers in Education Conference, Lincoln, NE, USA, 13–16 October 2021; pp. 1–5. [Google Scholar]
  48. Kamber, D.N. Personalized Distance-Learning Experience through Virtual Oral Examinations in an Undergraduate Biochemistry Course. J. Chem. Educ. 2021, 98, 395–399. [Google Scholar] [CrossRef]
  49. Balasubramanian, B.; DeSantis, C.; Gulotta, M. Assessment à la Mode: Implementing an Adaptable Large-Scale Multivariant Online Deferred-Grade Exam for Virtual Learning. J. Chem. Educ. 2020, 97, 4297–4302. [Google Scholar] [CrossRef]
Figure 1. Comparison of fail rate over ten semesters.
Figure 2. Comparison of mean marks obtained over ten semesters.
Figure 3. Comparison of fail rate before and after the change to nonexam assessments.
