Article

Dynamical Continuous Discrete Assessment of Competencies Achievement: An Approach to Continuous Assessment

by Luis-M. Sánchez-Ruiz *, Santiago Moll-López, Jose-Antonio Moraño-Fernández and María-Dolores Roselló
Departamento de Matemática Aplicada, Universitat Politècnica de València, 46022 Valencia, Spain
* Author to whom correspondence should be addressed.
Mathematics 2021, 9(17), 2082; https://doi.org/10.3390/math9172082
Submission received: 4 August 2021 / Revised: 23 August 2021 / Accepted: 24 August 2021 / Published: 28 August 2021
(This article belongs to the Special Issue Advanced Methods in Computational Mathematical Physics)

Abstract
Learning is a non-deterministic complex dynamical system in which students transform inputs (classes, assignments, personal work, gamification activities, etc.) into outcomes (acquired knowledge, skills, and competencies). In the process, students generate outputs in a variety of ways (exams, tests, portfolios, etc.). The result of these outputs is a grade aimed at measuring the (level of) competencies achieved by each student. We revisit the relevance of continuous assessment to obtain this grading. We simultaneously investigate the outputs generated at different moments as modifiers of the system itself, since they may reveal a variation in the level of competencies achievement previously assessed. This is a novelty in the literature and a cornerstone of our methodology. We call this process Dynamical Continuous Discrete Assessment, a form of blended assessment that may be used in traditional or blended learning environments. This article provides an 11-year perspective on applying this Dynamical Continuous Discrete Assessment in a mathematics subject for aerospace engineering students, as well as the students’ perception of continuous assessment.

1. Introduction

Learning is a complex dynamical process in which students gain knowledge, improve or acquire skills, achieve or increase competencies, and change or modify habits [1,2,3,4,5]. The learning process runs through various activities, such as master classes, problem solving, gamification activities, or lab sessions, where students play a more or less active role with student-centered activities, such as project-based learning, flipped teaching (FT) [6,7,8,9], or a combination of these in the so-called blended learning (BL) methodologies [10,11,12,13,14,15,16]. Indeed, BL and educational technology have proven to be crucial in university resilience when coping with the COVID-19 pandemic that disrupted the 2020 academic year [17,18,19,20,21,22,23,24].
The students’ learning process, whatever system and circumstances rule it, leads to a grade. This grading is an extremely important part of the teachers’ job, as students understand it as a measure of their academic success [25].
At the university level, under teacher-centered methodologies, assessment traditionally targeted concept mastery by students, but with the spread of competency-based education [26,27,28,29], some form of continuous assessment (CA) is usually run to provide a grade that reflects the level of competencies achievement [30,31,32]. The CA paradigm can be implemented at the university level in a wide variety of forms, considering different outputs (exams, tests, projects, assignments, portfolios, essays, presentations, etc.) [30,31,32,33,34]. On the other hand, CA has been extensively used in schools [35,36,37,38], potentially due to its formative character. Within this context, and in relation to obtaining data on the students’ learning process, Elliott, Resing, and Beckmann [36] distinguish between dynamic testing and dynamic assessment: the former is of particular interest for academic researchers in psychology focusing on the study of reasoning and problem solving, whereas the latter appeals to those with a practitioner orientation, who tend to be particularly concerned with exploring the ways in which assessment data can inform educational practice. Both dynamic testing and dynamic assessment fit within constructive alignment [39], an outcome-based approach that requires adjusting teaching and assessment accordingly.
Regardless of the university or school level at which it is run, this alignment should address three core issues [40]: What competencies should the students achieve? What will the students do to achieve these competencies? How can the students’ competencies be evaluated?
This paper addresses the last question. The authors sought a CA method that evaluates each student, follows their progress [41,42,43], and enhances its formative capabilities by helping and encouraging students to reach and improve their expected competencies throughout, and at the end of, the course. Partial results, presented in [44,45], are fully formalized here under the name Dynamical Continuous Discrete (DCD) Assessment (DCDA). DCDA embraces the idea that each assessment and output is itself an input into the learning process, so we must check whether subsequent outputs confirm that the assessment of prior topics matches the degree of achievement of the competencies already assessed. The DCDA method is a novelty that combines the well-known CA paradigm, widely considered in the literature [30,31,32,33,42], with a system that takes into consideration the chains of topics relating to each other in a discrete dynamical sense, in order to confirm or reassess the level of competencies achieved.
We illustrate the DCD approach to CA in a STEM subject, particularly in mathematics; however, DCDA may be applied to any discipline where chains of topics exist. Here, we use the word ‘chain’ with the mathematical meaning of a partially ordered set, i.e., some topics and their competencies must be achieved before handling others, but there may be pairs of topics between which there is no pre-established order. We provide an 11-year experience applying DCDA and include an interesting and novel perception of CA from students, which seems to support the merits of DCDA.
This paper is organized as follows. In Section 2, we revisit the CA setting and describe the group of students that provided their perception of CA, Section 3 describes the DCD essence, and Section 4 describes how this CA approach has been implemented in a first-year mathematics subject of an engineering degree. Section 5, Section 6 and Section 7 gather the results, discussion, and conclusions, respectively.

2. Materials and Methods

2.1. Assessment and Continuous Assessment

Terenzini [46] went to the backbone of assessment when asking what the purpose of an assessment is, at what level it is to be carried out, and what is to be assessed. He concluded that its primary purpose, at its purest level, was the improvement of learning. For most students, the purpose of the assessment is simply to measure the level of competence they have achieved. In fact, a key aspect in education is knowing each student’s level of achieved competencies, and this is done through different methods of assessment from which teachers obtain information about the students’ performance.
When assessment goes further, it aims to guarantee that the competencies expected after a course have indeed been achieved. For this purpose, gathering information by means of different activities is essential, and it should have some effect on the learning process [47]. To become a continuous assessment, it should be much more than merely systematic, accumulative, or guidance-oriented, as required by Ezewu and Okoye [48]. We embrace these two opinions along with others in the literature. Moreover, we think that guidance is a core idea for understanding the learning process as a complex dynamic system, as it provides an input that we must consider, and use properly, when subsequent assessments rely on previously assessed topics. Consequently, CA should include some assessment procedure that contributes to the achievement of competencies rather than a mere acquisition of concepts.
The main advantages of CA are boosting student motivation, strengthening the practice and effectiveness of feedback, and helping students to become self-reflective learners. Its challenges are the time cost and the anxiety some students may suffer from being continuously evaluated, as pointed out by Bjælde, Jørgensen, and Lindberg [49]. Weighing up the pros and cons, the authors agree that the advantages of CA seem to outweigh its challenges.

2.2. Formative and Summative Assessment

In the authors’ opinion, fruitful CA should focus on each learner’s learning process and on assessment of the learning outcomes.
CA is usually both formative and summative. The aim of formative assessment is to monitor the students’ learning process and provide feedback that helps both students and instructors to identify the level of competencies achieved by students, whereas summative assessment aims to evaluate the level of achievement of competencies at given moments of the learning process by comparing the outcomes against some standard or rubric [50,51].
Conceptually, this feedback is essential so that instructors may play their coaching role in a scenario where students are the main characters of the learning process. Under this perspective, CA becomes formative and does not take into account just the outcomes of the learner, in which case it might be considered mostly, if not exclusively, summative.

2.3. Competencies Assessment

We will focus on the assessment of mathematics competencies of engineering students. Nevertheless, most of its content may be extrapolated to other fields.
The KOM project [52] develops the idea that assessment of mathematics competencies should be based on a number of disparate activities, in addition to which we cannot dismiss the need for some feedback method aimed at estimating the learning process of each learner [53]. For this purpose, the KOM project introduces three dimensions to be considered for each competency: degree of coverage, radius of action, and technical level. They are meant to measure, respectively, how widely the aspects that characterize a competency are mastered, in what situations a person can exercise it, and how advanced the tools and concepts related to it are.
Indeed, the KOM project pays priority attention to implementing some procedure that enables instructors to follow how students advance in their mathematical competencies achievement throughout the education system. An adequate design of activities fulfills several purposes in this sense, as activities may improve the level of more than one of the targeted dimensions, or achieve target competencies.
The output of these activities is not deterministic because of what the authors view as a certain stochastic nature of the learning process, due to a number of reasons, e.g., academic background, level of achieved competencies, personal reasons, or time constraints; thus, the representation of assessments usually follows some typical statistical distribution.

2.4. Questionnaire

To get feedback on the students’ perception of CA in general and the features of DCDA in particular, we requested the opinion of students evaluated with DCDA at the Higher Technical School of Design Engineering (ETSID) of the Universitat Politècnica de València (UPV), using a questionnaire as the research method (see Supplementary Materials).
The questionnaire was created ad hoc, reflecting the objectives and matters of this study and considering the experiences and research carried out in [54,55]. To design the questionnaire, the expert opinion technique was used.
No personal information was gathered, since the aim was to guarantee anonymity and promote freedom of response. The questionnaire consisted of 20 questions: Likert-scale items, one multiple-choice question (about the academic year), and one open question. The questions related to DCDA used 4- or 5-level Likert scales (1 to 4 or 5, depending on the item) and asked about students’ perception of and attitude towards the assessment strategies applied in higher education.

2.5. Participants and Data Collection and Management

A total of 484 students of Mathematics I from the Bachelor’s Degree in Aerospace Engineering during the 2017/18, 2018/19, 2019/20, and 2020/21 academic years received an invitation to fill out the questionnaire, of which 355 completed it. Therefore, the sample represents 73.3% of the students in the aforementioned subject. Participation was distributed as follows: 76 respondents from the academic year 2017/18 (21.4%), 84 from 2018/19 (23.6%), 95 from 2019/20 (26.7%), and 100 from 2020/21 (28.2%). Their ages ranged from 18 to 21 years old. The questionnaire was distributed online via the university platform PoliformaT, which is based on the Sakai learning management system and with which the students were familiar. The survey was not mandatory, and students were informed that it was anonymous and that they could stop filling it in at any moment.
The R software (version 4.1.1) [56] was used for data treatment, and Excel software (Office Professional Plus 2019) [57] was used to obtain the graphs, as shown in the figures.

3. Fostering Competencies Achievement with DCDA Hatchling

In Section 2, we provided a timely insight into the CA paradigm. It is not unusual to find practitioners who call their assessment continuous simply because it does not rely solely on a final exam. However, to ensure assessment is not merely summative, it should not be limited to increasing the number of partial exams; moreover, CA must be aware of the hazard that, if uncontrolled, it may cause some inflation in grading due to shallow learning, as pointed out in [58]. Consequently, CA should be more than just a mechanism by which students receive their final grade simply by adding weighted evaluations of their performance during a course.
The essence of DCDA relies on identifying chains of topics (knots) where each intermediate or final knot uses competencies of anterior knots. To apply DCDA, it is essential to identify the chain(s) of knots as a directed graph showing the influence/dependence of knots on each other.
DCDA falls within the formative category, as assessments are inputs that affect each student’s learning process and help improve their subsequent outputs; indeed, previous assessments may be modified if the learner shows an improvement in the assessed competencies when they are used in later knots.
In a pure CA model, each assessment should evaluate all the competencies from the beginning of the course. However, time constraints make this option unfeasible. For this reason, DCDA generally restricts its scope to the knots located before the one evaluated at a given moment in the chain. This approach acknowledges that, while achieving a competency, a learner may acquire a higher mastery of previously assessed competencies, because the learning process has a recursive and accumulative nature. Thus, DCDA seeks to confirm the level of previously assessed competencies in the chain, and whether there is some upkeep, improvement, or decline in them. In our opinion, performing activities and generating outputs should be understood as stochastic processes in which different students generate different results, even under the same circumstances, due to a number of reasons, such as individual perceptions, academic background, personal situations, or silly errors that disguise their real competence level.
In the case of a loss of level of competencies, it may be the consequence either of their degradation or of an inaccuracy in the assessment process. In the former case, some specific activities should be recommended to overcome the loss; in the latter, the reason for the inaccuracy should be identified and corrected for the future. Analogously, in the case of improvement, there may be a real improvement of competencies or an undetected failure in the previous assessment, in which case the authors consider it fair to modify the previous knot grading so that it reveals the real level of competence. This should be done at relevant moments of evaluation that carry a higher weight in the final grading and that do not merely require shallow learning related to the previously assessed knots.
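To make this dynamical regrading concrete, the sketch below (in Python) illustrates one possible reading of the rule just described: the grade of an earlier knot is raised when a later knot in the chain demonstrates higher mastery of the same competencies. The max-update rule and the function name are our illustrative assumptions, not the exact procedure used in the course.

```python
# Illustrative sketch of the dynamical regrading step in DCDA (assumed rule, not the
# authors' exact algorithm). Grades are on a 0-10 scale; successors[k] lists the later
# knots whose assessment re-uses the competencies of knot k.

def dynamical_regrade(grades: dict[str, float],
                      successors: dict[str, list[str]]) -> dict[str, float]:
    """Confirm or raise the grade of earlier knots when later knots in their chain
    show a higher mastery of the same competencies (hypothetical max-update rule)."""
    updated = dict(grades)
    for knot, later in successors.items():
        evidence = [grades[s] for s in later if s in grades]
        if evidence and max(evidence) > updated.get(knot, 0.0):
            updated[knot] = max(evidence)  # improvement shown later: regrade upward
    return updated

# Toy example: a weak first knot A followed by a strong dependent knot B raises A.
print(dynamical_regrade({"A": 4.0, "B": 8.0}, {"A": ["B"]}))  # {'A': 8.0, 'B': 8.0}
```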
In a competencies-based assessment, a pass should be awarded to students that reach a threshold where all competencies are assessed. In this sense, DCDA becomes a useful tool as it is continuously monitoring the (level of) competencies achievement, recognizing when previous competencies located in the chains of knots have improved, and proposing reinforcement activities in an individualized manner if adequate.
DCDA may become a form of blended assessment, where different methods of assessment are used, including, inter alia, digital, collaborative, tests, and open questions [59,60]. This must be increasingly standardized as a new generation of students reaches university with new ways of learning [61,62,63,64] and new activities that appeal to them [65].
We call this process Dynamical Continuous Discrete Assessment:
  • Dynamical because the grading at some knots may be modified if the competence there assessed shows improvement in ulterior knots.
  • Continuous because this falls within the standard CA paradigm.
  • Discrete to recall that all activities are indeed run at specific moments, some of which may have high relevance. Nevertheless, this relevance should not be a source of stress if students have undertaken adequate activities in advance.

4. A DCDA Implementation Case

In this article, we exemplify DCDA using a mathematics class. However, DCDA may be applied to any subject where the instructors may identify some chains of topics (knots) that have a natural and clear order in the learning process.
The case presented has followed DCDA over an 11-year period. It corresponds to the annual mathematics subject of the BEng in Aerospace Engineering at UPV. Its outreach remains mostly at Level 1, as referenced in [66] (pp. 29–36), though some topics reach Level 2 to address other aerospace engineering subjects. No Level 3 is required because there are other mathematics subjects in ensuing courses.
The competencies pursued with this subject include the ability to apply knowledge about linear algebra, differential geometry, differential and integral calculus, and an introduction to differential equations and numerical methods. To achieve them and assess the learning process, a number of different activities are scheduled, recently with an increasing number of them via digital means and flipped methodology [24]:
  • Theory reading, understanding, and application to problem solving;
  • Lab practice, with weekly sessions and individual exams;
  • Written exercises;
  • Challenging activities, such as quizzes, either computer-aided or collaboratively executed;
  • Gamified activities, such as escape rooms, to promote the strengthening of mathematical competencies and to boost positive emotions and motivation.
We distinguish four main blocks of competencies to be achieved in different topics within this subject:
  • Calculus I (C1) dealing with one real variable function and its applications;
  • Linear Algebra (Al) dealing with vector spaces, matrices, and diagonalization;
  • Calculus II (C2) dealing with real functions of several variables and their applications, including differentiation, multiple integration, and surface integration;
  • Series (S) dealing with numerical, power, and Fourier series.
These four blocks have relevant assessment exams, scheduled at dates fixed at the beginning of the course and with fixed weights; jointly with a set of autonomous activities, which are also assessed and designed to facilitate the assessment of the learning process itself, they contribute a weight wTP to the final grade (FG) of the subject. This part of the subject is referred to as TP. Regarding TP, four individual written exams are established: TP1 (C1), TP2 (Al), TP3 (C2), and TP4 (S, C1, C2), where the covered competencies are indicated in parentheses.
TP assessment is implemented with wTP = 80% in FG, which includes written exams and individual, collaborative, and game-based learning activities, such as escape rooms, covering the four aforementioned blocks.
Lab Practice (LP) is also accounted for in the assessment process with a weight wLP equal to 20% in FG. The LP competencies are achieved following a flipped learning methodology with weekly assessed sessions, and evaluated via an LE1 exam covering C1 at the end of the first semester and an LE2 exam covering Al, C2, and S at the end of the course.
With these elements, the following chains of topics are recognized:
  • TP1 → LE1 → LE2
  • TP1 → TP3 → TP4 → LE2
  • TP2 → LE2
With this in mind, we may draw the graph presented in Figure 1.
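For readers who prefer a computational view of these precedence relationships, the sketch below (Python) encodes the three chains listed above as a directed graph and derives one valid precedence order of the knots. The edges come directly from the chains; the variable and function names are ours, chosen for illustration.

```python
# The chains TP1 -> LE1 -> LE2, TP1 -> TP3 -> TP4 -> LE2 and TP2 -> LE2 as a directed graph.
from graphlib import TopologicalSorter  # standard library, Python 3.9+

EDGES = {
    "TP1": {"LE1", "TP3"},
    "LE1": {"LE2"},
    "TP3": {"TP4"},
    "TP4": {"LE2"},
    "TP2": {"LE2"},
    "LE2": set(),
}

# TopologicalSorter expects each node mapped to its predecessors, so invert the edges.
preds = {knot: set() for knot in EDGES}
for knot, nexts in EDGES.items():
    for nxt in nexts:
        preds[nxt].add(knot)

# One chronological (precedence-respecting) order of the assessment knots.
print(list(TopologicalSorter(preds).static_order()))
# e.g. ['TP1', 'TP2', 'LE1', 'TP3', 'TP4', 'LE2']
```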
The direction of the time scale is represented on the left side of the graph. The different evaluation moments have been placed approximately within the corresponding schooling period, at different heights, so that the reader can establish a chronological order. When the limits of an item extend along the time scale, it means that the indicated activities are carried out throughout the academic year. There are more relationships than those reflected in Figure 1; the assessment moments represented in the graph mostly capture the precedence relationships between their components.
The most important knots within TP are TP1, TP2, TP3, and TP4, which represent 90% of wTP. Their weights during 2020–21 were 10%, 21%, 19%, and 40%, respectively. The remaining 10% is given to the activities developed during the course, meant to assess and help follow the whole learning process. Al is not specifically reassessed in TP4 unless the learner has shown a failure in its competencies, which has rarely happened in this 11-year period. Indeed, Al has been continuously reassessed through the second semester, since TP2 is held in January, the Al lab weekly sessions run throughout February–March, and LE2 devotes 45% of its content to Al questions. This fact is indicated by a grey dotted arrow in Figure 1.
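As a worked illustration of how these weights combine into the final grade, the following sketch (Python) reproduces the arithmetic described above: the TP mark aggregates TP1–TP4 (10%, 21%, 19%, 40%) plus 10% for course activities, and FG = 0.8·TP + 0.2·LP. The sample marks are invented purely for illustration.

```python
# Weighted aggregation of the 2020-21 assessment items into the final grade (FG).
TP_WEIGHTS = {"TP1": 0.10, "TP2": 0.21, "TP3": 0.19, "TP4": 0.40, "Activities": 0.10}
W_TP, W_LP = 0.80, 0.20  # weights of TP and Lab Practice (LP) in the final grade

def final_grade(tp_marks: dict[str, float], lp_mark: float) -> float:
    """FG on a 0-10 scale from the TP component marks (0-10) and the LP mark (0-10)."""
    tp = sum(TP_WEIGHTS[item] * mark for item, mark in tp_marks.items())
    return W_TP * tp + W_LP * lp_mark

# Hypothetical student, with marks taken after the dynamical regrading step.
marks = {"TP1": 7.5, "TP2": 6.0, "TP3": 8.0, "TP4": 7.0, "Activities": 9.0}
print(round(final_grade(marks, lp_mark=8.5), 2))  # 7.48
```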
Double-ended arrows mean that the dynamical part of DCDA applies systematically: in addition to including some specific questions related to previous knots in the chain, an improvement in the related competencies is taken into account to modify the previous gradings of those competencies. The authors believe that this updating of the grading must keep some balance between the level of competencies finally achieved and a commitment to following an adequate pace for achieving them when expected. DCDA is a form of blended assessment that must be used wisely and transparently, so that students are motivated to achieve competencies at the right time and, in case of failure for whatever reason, are motivated to improve them, both to help them master subsequent topics and to have a chance to improve their assessment.
Figure 1 gathers, in simplified form, the complex problem of assessing all the competencies achieved in an annual subject. Semester subjects may have fewer chains and be easier to handle. Regardless, DCDA is a flexible methodology that allows adaptations and can be modified depending on intrinsic constraints within different regulations.

5. Results

5.1. An 11-Year Perspective with DCD Assessment

Computing algorithms have been implemented in different spreadsheets to apply DCDA over an 11-year period in the first-year mathematics subject of aerospace engineering. The results are gathered in Figure 2.
In Figure 2, the numbers indicate the corresponding numbers of students that passed, failed, or dropped out of Mathematics I in each academic year from 2010 to 2021. From Figure 2, it follows that the success rate has consistently remained around 92–93%, and always above 90%, during this 11-year period of DCDA.
Importantly, this assessment practice seems to have motivated students to try to improve continuously, by showing them the chains of knots where they could demonstrate their improvement. Dropping out has become the exception, steadily remaining below 5% of students and even reaching zero in some years.
This method of performing a CA of students admits variations with different sets of weighted arcs in Figure 1, depending on the depth of each activity. It may be applied during the course or, preferably, at the end, when students have shown that the competencies in the different parts of the subject have been achieved.

5.2. Students’ Perception

The FG of the surveyed students, obtained as a result of DCDA, is distributed as indicated in Figure 3, which shows the percentages of the DCDA final grades achieved in the whole group. Similar percentages are found in each of the academic periods.
To collect the students’ opinion of the DCDA strategy, they were asked about the representativeness of the grade obtained in relation to the self-perceived competencies developed during the year; the results are shown in Figure 4. Approximately 35% of the students thought the grade awarded in the formative assessment was very representative of the knowledge and skills acquired, 45.9% thought it was representative, 18.9% thought it was not representative (lower than they expected), and no student thought it was not representative (higher than they expected).
Representativeness is subjective, as shown by the fact that no student thought they had received a higher grade than deserved. Figure 5 shows the distribution of the self-perceived representativeness of the knowledge and competencies acquired depending on the final grade. It should be noted that the final grade, valued from 0 to 10 (where 0 means not having reached/shown any competencies and 10 means having demonstrated the maximum degree of assessed competencies), has been divided into four intervals: 0–4.99, corresponding to Fail, 5–6.99 (Pass/enough level of competencies), 7–8.99 (Good–Very Good/medium–high level of competencies), and 9–10 (Excellent/high level of competencies).
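For reference, a minimal helper (Python) mapping a 0–10 final grade to the four intervals just described is sketched below; the boundaries and labels are taken from the text, while the function name is ours.

```python
def grade_band(fg: float) -> str:
    """Map a 0-10 final grade to the intervals used in Figure 5."""
    if fg < 5:
        return "Fail (0-4.99)"
    if fg < 7:
        return "Pass (5-6.99): enough level of competencies"
    if fg < 9:
        return "Good-Very Good (7-8.99): medium-high level of competencies"
    return "Excellent (9-10): high level of competencies"

print(grade_band(7.48))  # Good-Very Good (7-8.99): medium-high level of competencies
```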
When the students were asked about the most convenient kind of assessment (formative or summative) for competencies acquisition, the results, shown in Figure 6, showed a significant majority preference for the formative assessment (94.3%) versus the summative assessment (5.7%).
In reference to the formative activity carried out in the TP4 exam, oriented to demonstrate that the necessary competencies of TP1 and TP3 were achieved, 60.6% valued this moment of evaluation very positively, 38% valued it positively, 0.8% negatively, and 0.6% very negatively (Figure 7).
Students’ perception of the effect of being able to show how the competencies related to TP1 and TP3 had evolved in the knot TP4 can be seen in Figure 8. However, it is important to emphasize that this appreciation depended on the level of competencies achieved by each student, which introduces more variability in the answers. Indeed, 29.9% thought their final assessment was greatly improved after TP4, 56.1% thought the demonstration of their competencies improved their final evaluation somewhat, 5.6% thought it had no effect, and 8.5% thought it worsened their final grade.
Regarding the use of different assessment strategies and activities during the course (TP and LP tests, digital games, and tests), 62% of the students thought it had been very helpful for their development of competencies, 35.2% thought it had been helpful, 1.1% thought it had not been helpful (their level of competencies remained the same), and 1.7% thought it had not been helpful at all (it worsened) (see Figure 9). This last result arises because each student reacts differently to activities depending on many prior personal characteristics and competencies. Some activities require competencies that have not yet been adequately developed or acquired, and this worsens the performance in other activities, which reinforces the active/retroactive nature of the assessment strategy employed.
In reference to the general opinion of whether this methodology helps in the development and improvement of mathematical competencies (Figure 10), 77.2% thought it had helped them to improve a lot, 18.6% thought it had helped them to improve a little, 4.2% thought it had not helped them to improve much, and no students thought it had worsened.
Figure 11 shows the distribution of the perception of improvement in mathematical competencies, based on the final grade obtained. It should be noted that even in the case of not having reached the competencies, the perception is of improvement.
Finally, the general opinion about the DCDA methodology employed was mostly positive (86.2% = 49.9% very positive + 36.3% positive), with 12.1% neutral, 0.8% negative, and 0.8% very negative (Figure 12).
The responses to questions 3, 4, 7, and 8 of the questionnaire are attached as an appendix in Supplementary Materials. This refers to the representativeness of the FG regarding the competencies achieved, the type of assessment (formative or summative), the use of different assessment methodologies (TP and LP tests, digital games, and activities), and the usefulness of the selected DCDA strategy (Figure 4, Figure 5 and Figure 6 and Figure 9, Figure 10 and Figure 11). Our results show a positive perception of DCDA as a facilitator to gain knowledge and skills, and as a method to evaluate the competencies achieved at the end of the course.

6. Discussion

Properly grading each student is a complex problem and requires assessment techniques to ensure that the final grade treats each student fairly and really reflects the level of achievement.
For this objective, DCDA has proved to be a challenging CA that simultaneously assesses the learning process, evaluates the level of achievement by students throughout the course, and, in the end, encourages and motivates improvement. DCDA is summative, but has a formative essence in its conception. It is fully aligned with the continuous nature of the learning process, examines the execution of activities, and continually checks whether competencies are being achieved and improved, or not.
This is the core idea of DCDA: to avoid shallow learning, it takes advantage of existing chains of topics to reassess whether previously assessed competencies match their subsequent use.
In addition to its motivational advantages, the evaluation of the DCDA strategy must be based on whether the numerical final grade represents an improvement in recognizing the level of the competencies achieved by the students. Hence, the academic results of the students, the students’ perception of the evaluation strategy, and its adaptability are some key factors to look at, too.
The application of a formative assessment must entail a significant capacity for improvement of the learning process. An approximation for the evaluation of this improvement is the subjective perception of the students.
The opinions of students have yielded very favorable results regarding the perception of improvement in competencies and effective learning. Indeed, most of them thought that the final grade in the formative assessment was representative of the knowledge and competencies acquired during the academic year. Regarding the achievement of the necessary mathematical competencies in the syllabus of the subject, almost all of them thought that the assessment strategy had helped their learning; however, a minority of students thought it did not help. The students highly appreciated the activities carried out to demonstrate their evolution in previously assessed competencies (such as TP4, discussed in Section 4) in the assessment strategy. Therefore, they tended to think that this benefited the acquisition of competencies and improved their final grade.
Hence, the DCDA system has provided an enriching experience of following the learning process continuously and a motivating factor for students to achieve and improve their competencies. It fairly assesses the final level of competencies of each student and of the overall course. DCDA may be applied to any course where chains of topics exist and there is a (repeated) use of previously assessed competencies.

7. Conclusions

Paradigm shifts have existed throughout history as situations and conceptions evolve, and we must adjust accordingly [67]. DCDA could contribute to adapting the CA paradigm to new forms of learning (FT, BL) that are already prominent in higher education.
DCDA is a form of blended assessment intended to encourage students to improve their achieved competencies and recognize when this positive evolution happens.
Our experience in this 11-year implementation has been very positive. It has allowed us to refine minor details for identifying previously assessed competencies that do not match later performance, whether showing better or worse mastery than expected. This enables us to adjust previously graded competencies or suggest activities as required.
In summary, DCDA has proven to be a tool that:
- Encourages improvement of previously assessed competencies in the chains of topics;
- Facilitates that all expected levels of competencies are achieved;
- Has received a favorable perception by the students evaluated with it;
- Is flexible enough so that different instructors may apply it in a sensible way, considering the structure of the course and each institution’s regulations;
- Facilitates awareness of shallow learning hazards.
We suggest that future research concentrate on detecting and designing new types of activities, including gaming, that may appeal to new university students, and tackle deep learning, which could be aligned with the learning process of students and their assessment.

Supplementary Materials

The following are available online at https://www.mdpi.com/article/10.3390/math9172082/s1.

Author Contributions

Conceptualization, L.-M.S.-R. and S.M.-L.; methodology, L.-M.S.-R., S.M.-L., J.-A.M.-F. and M.-D.R.; validation, L.-M.S.-R., S.M.-L., J.-A.M.-F. and M.-D.R.; investigation, L.-M.S.-R., S.M.-L., J.-A.M.-F. and M.-D.R.; data curation, L.-M.S.-R., S.M.-L. and M.-D.R.; writing—original draft preparation, L.-M.S.-R., S.M.-L. and M.-D.R.; writing—review and editing, L.-M.S.-R., S.M.-L., J.-A.M.-F. and M.-D.R. All authors have read and agreed to the published version of the manuscript.

Funding

Research developed within a Project funded by Instituto de Ciencias de la Educación (ICE) through the Educational Innovation and Improvement Project (PIME) A + D 2019, number 1699-B, at Universitat Politècnica de València (UPV), Spain.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

This experience has been developed within the GRoup of Innovative Methodologies for Assessment in Engineering Education GRIM4E, of Universitat Politècnica de València (Valencia, Spain).

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study, the collection, analyses, or interpretation of data, the writing of the manuscript, or in the decision to publish the results.

References

  1. Gonczi, A. Competency-based learning: A dubious past—An assured future? In Understanding Learning at Work; Boud, D., Garrick, J., Eds.; Routledge: London, UK, 1999; pp. 180–197. [Google Scholar]
  2. Westera, W. Competences in education: A confusion of tongues. J. Curric. Stud. 2001, 33, 75–88. [Google Scholar] [CrossRef]
  3. Arguelles, A.; Gonczi, A. Competency Based Education and Training: A World Perspective; Limusa: Balderas, Mexico, 2000. [Google Scholar]
  4. Winterton, J.; Delamare-Le Deist, F.; Stringfellow, E. Typology of Knowledge, Skills and Competencies: Clarification of the Concept and Prototype; European Centre for the Development of Vocational Training (Cedefop): Thessaloniki, Greece, 2006. [Google Scholar]
  5. Edwards, M.; Sánchez Ruiz, L.M.; Sánchez-Díaz, C. Achieving Competence-Based Curriculum in Engineering Education in Spain. Proc. IEEE 2009, 97, 1727–1736. [Google Scholar] [CrossRef]
  6. Kim, M.K.; Kim, S.M.; Khera, O.; Getman, J. The experience of three flipped classrooms in an urban university: An exploration of design principles. Internet High. Educ. 2014, 22, 37–50. [Google Scholar] [CrossRef]
  7. Hughes, H. Introduction to flipping the college classroom. In Proceedings of the World Conference on Educational Multimedia, Hypermedia and Telecommunications 2012, Chesapeake, VA, USA, 26 June 2012; pp. 2434–2438. [Google Scholar]
  8. Pardo, A.; Pérez-Sanagustín, M.; Parada, H.A.; Leony, D. Flip with care. In Proceedings of the SoLAR Southern Flare Conference, Sidney, Australia, 29–30 November 2012. [Google Scholar]
  9. Chen, Y.; Wang, Y.; Kinshuk; Chen, N.S. Is FLIP enough? Or should we use the FLIPPED model instead? Comput. Educ. 2014, 79, 16–27. [Google Scholar] [CrossRef] [Green Version]
  10. Forcada, N.; Casals, M.; Roca, X.; Gangolells, M. Students’ Perceptions and Performance with Traditional vs. Blended Learning Methods in an Industrial Plants Course. Int. J. Eng. Educ. 2007, 23, 1199–1209. [Google Scholar]
  11. Reeves, T.C.; Reeves, P.M. Designing online and blended learning. In University Teaching in Focus: A Learning-Centred Approach; Hunt, L., Chalmers, D., Eds.; Routledge: New York, NY, USA, 2012; pp. 112–127. [Google Scholar]
  12. Strayer, J.F. How learning in an inverted classroom influences cooperation, innovation and task orientation. Learn. Environ. Res. 2012, 15, 171–193. [Google Scholar] [CrossRef]
  13. Beck, L.; Chizhik, A. Cooperative learning instructional methods for CS1: Design, implementation, and evaluation. ACM Trans. Comput. Educ. 2013, 13, 10. [Google Scholar] [CrossRef] [Green Version]
  14. Torrisi-Steele, G.; Drew, S. The literature landscape of blended learning in higher education: The need for better understanding of academic blended practice. Int. J. Acad. Dev. 2013, 18, 371–383. [Google Scholar] [CrossRef] [Green Version]
  15. Benta, D.; Bologa, G.; Dzitac, S.; Dzitac, I. University Level Learning and Teaching via E-Learning Platforms. Procedia Comput. Sci. 2015, 55, 1366–1373. [Google Scholar] [CrossRef] [Green Version]
  16. Moraño-Fernández, J.A.; Moll-López, S.; Sánchez-Ruiz, L.M.; Vega-Fleitas, E.; López-Alfonso, S.; Puchalt-López, M. Micro-Flip Teaching with e-learning Resources in Aerospace Engineering Mathematics: A Case Study. In Proceedings of the World Congress on Engineering and Computer Science 2019, WCECS 2019, San Francisco, CA, USA, 22–24 October 2019. [Google Scholar]
  17. Sun, L.; Tang, Y.; Zuo, W. Coronavirus pushes education online. Nat. Mater. 2020, 19, 687. [Google Scholar] [CrossRef]
  18. Aristovnik, A.; Keržič, D.; Ravšelj, D.; Tomaževič, N.; Umek, L. Impacts of the COVID-19 Pandemic on Life of Higher Education Students: A Global Perspective. Sustainability 2020, 12, 8438. [Google Scholar] [CrossRef]
  19. Sá, M.J.; Serpa, S. The global crisis brought about by SARS-CoV-2 and its impacts on education: An overview of the Portuguese panorama. Sci. Insights Educ. Front. 2020, 5, 525–530. [Google Scholar] [CrossRef] [Green Version]
  20. Mpungose, C.B. Emergent transition from face-to-face to online learning in a South African University in the context of the Coronavirus pandemic. Humanit. Soc. Sci. Commun. 2020, 7, 113. [Google Scholar] [CrossRef]
  21. Biswas, B.; Roy, S.K.; Roy, F. Students Perception of Mobile Learning during COVID-19 in Bangladesh: University Student Perspective. Aquademia 2020, 4, ep20023. [Google Scholar] [CrossRef]
  22. Karalis, T.; Raikou, N. Teaching at the Times of COVID-19: Inferences and Implications for Higher Education Pedagogy. Int. J. Acad. Res. Bus. Soc. Sci. 2020, 10, 479–493. [Google Scholar] [CrossRef]
  23. Rapanta, C.; Botturi, L.; Goodyear, P.; Guàrdia, L.; Koole, M. Online University Teaching during and after the Covid-19 Crisis: Refocusing Teacher Presence and Learning Activity. Postdigit. Sci. Educ. 2020, 2, 923–945. [Google Scholar] [CrossRef]
  24. Sánchez-Ruiz, L.M.; Moll-López, S.; Moraño-Fernández, J.A.; Llobregat-Gómez, N. B-Learning and Technology: Enablers for University Education Resilience. An Experience Case under COVID-19 in Spain. Sustainability 2021, 13, 3532. [Google Scholar] [CrossRef]
  25. York, T.Y.; Gibson, C.; Rankin, S. Defining and Measuring Academic Success. Res. Eval. 2015, 20, 5. [Google Scholar]
  26. Baughman, J.; Brumm, T.; Mickelson, S. Student Professional Development: Competency-Based Learning and Assessment. J. Technol. Stud. 2012, 38, 115–127. [Google Scholar] [CrossRef] [Green Version]
  27. Gervais, J. The operational definition of competency-based education. J. Competency-Based Educ. 2016, 1, 98–106. [Google Scholar] [CrossRef]
  28. Guerrero-Roldán, A.E.; Noguera, I. A model for aligning assessment with competences and learning activities in online courses. Internet High. Educ. 2018, 38, 36–46. [Google Scholar] [CrossRef]
  29. González Segura, C.M.; García García, M.; Menéndez-Domínguez, V.H.; Sánchez Arias, V.G. Computational Assistant for the Assessment University Competencies in B-Learning Environments. IEEE-RITA 2020, 15, 299–306. [Google Scholar]
  30. Aina, J.K.; Adedo, G.A. Correlation between continuous assessment (CA) and Students’ performance in physics. J. Educ. Pract. 2013, 4, 6–9. [Google Scholar]
  31. Coll, C.; Rochera, M.; Mayordomo, R.M.; Naranjo, M. Continuous assessment and support for learning: An experience in educational innovation with ICT support in higher education. Electr. J. Res. Educ. Psychol. 2007, 5, 783–804. [Google Scholar]
  32. Combrinck, M.; Hatch, M. Students’ Experiences of a Continuous Assessment Approach at a Higher Education Institution. J. Soc. Sci. 2012, 33, 81–89. [Google Scholar] [CrossRef]
  33. Richardson, J.T.E. Coursework versus examinations in end-of-module assessment: A literature review. Assess. Eval. High. Educ. 2015, 40, 439–455. [Google Scholar] [CrossRef]
  34. Bearman, M.; Dawson, P.; Boud, D.; Bennett, S.; Hall, M.; Molloy, E. Support for assessment practice: Developing the Assessment Design Decisions Framework. Teach. High. Educ. 2016, 21, 545–556. [Google Scholar] [CrossRef]
  35. Elliott, J.G. Dynamic Assessment in Educational Settings: Realising Potential. Educ. Rev. 2003, 55, 15–32. [Google Scholar] [CrossRef]
  36. Elliott, J.G.; Resing, W.C.M.; Beckmann, J.F. Dynamic assessment: A case of unfulfilled potential? Educ. Rev. 2018, 70, 7–17. [Google Scholar] [CrossRef] [Green Version]
  37. Veerbeek, J.; Verhaegh, J.; Elliott, G.; Resing, W.C. Process-oriented Measurement Using Electronic Tangibles. J. Educ. Learn. 2017, 6, 155. [Google Scholar] [CrossRef] [Green Version]
  38. Stad, F.E.; Wiedl, K.H.; Vogelaar, B.; Bakker, M.; Resing, W.C.M. The role of cognitive flexibility in young children’s potential for learning under dynamic testing conditions. Eur. J. Psychol. Educ. 2019, 34, 123–146. [Google Scholar] [CrossRef] [Green Version]
  39. Biggs, J. Constructive alignment in university teaching. HERDSA Rev. High. Educ. 2014, 1, 5–22. [Google Scholar]
  40. Ghent University. Constructive Alignment: What Is It and Why Is It So Important? Available online: https://onderwijstips.ugent.be/en/tips/opleidingsonderdeel-samenstellen/ (accessed on 21 July 2021).
  41. Gibbs, G. How assessment frames student learning. In Innovative Assessment in Higher Education; Bryan, C., Clegg, K., Eds.; Routledge: London, UK, 2006; pp. 23–36. [Google Scholar]
  42. Daniel, I.O.A.; Island, V. Comparison of Continuous Assessment and Examination Scores in an English Speech Work Class. Int. J. Appl. Linguist. Engl. Lit. 2012, 1, 92–98. [Google Scholar] [CrossRef] [Green Version]
  43. de Sande, J.C.G.; Arriero, L.; Benavente, C.; Fraile, R.; Godino-Llorente, J.I.; Gutiérrez, J.; Osés, D.; Osma-Ruiz, V. A Case Study: Final Exam v/s Continuous Assessment Marks for Electrical and Electronic Engineering Students. In Proceedings of the CD-ROM International Conference of Education, Research and Innovation, ICERI, Madrid, Spain, 17–19 November 2008. [Google Scholar]
  44. Mínguez, F.; Moraño, J.A.; Roselló, M.D.; Sánchez Ruiz, L.M. Towards a Continuous Assessment of Mathematical Competencies in a First Year of Aerospace Engineering. In Proceedings of the 42nd SEFI Annual Conference, Birmigham, UK, 15–19 September 2014. [Google Scholar]
  45. Sánchez Ruiz, L.M.; Blanes Zamora, S.; Capilla Roma, M.T.; García Mora, M.B.; Llobregat Gómez, N.; Moll López, S.E.; Moraño Fernández, J.M.; Roselló Ferragud, M.D. Evaluación continua, clase inversa y cooperación activa en Matemáticas para ingenieros. Pi-InnovaMath 2018, 1, 1–7. [Google Scholar] [CrossRef]
  46. Terenzini, P.T. Assessment with open eyes: Pitfalls in studying student outcomes. J. High. Educ. 1989, 60, 644–664. [Google Scholar] [CrossRef]
  47. Stojadinović, Z.; Bozić, M.; Nadaźdi, A. Development and Implementation of Evaluation Framework for Quality Enhancement of Outcome-Based Curriculum. Int. J. Eng. Ed. 2021, 37, 397–408. [Google Scholar]
  48. Ezewu, E.E.; Okoye, N.N. Principles and Practice of Continuous Assessment; Evans Publishers: Ibadan, Nigeria, 1986. [Google Scholar]
  49. Bjælde, O.E.; Jørgensen, T.H.; Lindberg, A.B. Continuous assessment in higher education in Denmark: Early experiences from two science courses. Dan. Univ. Tidsskr. 2017, 12, 1–19. [Google Scholar]
  50. Velasco-Martínez, L.C.; Tójar Hurtado, J.C. Uso de rúbricas en educación superior y evaluación de competencias. Profr. Rev. Curric. Y. Form. Profr. 2018, 22, 183–208. [Google Scholar] [CrossRef]
  51. Nayak, A.; Umadevi, F.M.; Preeti, T. Rubrics based continuous assessment for effective learning of digital electronics laboratory course. In Proceedings of the 4th IEEE International Conference on MOOCs, Innovation and Technology in Education, MITE, Madurai, India, 9–10 December 2016; pp. 290–295. [Google Scholar]
  52. Niss, M.A. Mathematical competencies and the learning of mathematics: The Danish KOM project. In Proceedings of the 3rd Mediterranean Conference on Mathematics Education, Athens, Greece, 3–5 January 2003; Hellenic Mathematical Society: Athens, Greece, 2003; pp. 116–124. [Google Scholar]
  53. Liakos, Y.; Rogovchenko, S.; Rogovchenko, Y. A New Tool for the Assessment of the Development of Students’ Mathematical Competencies. In Proceedings of the 19th SEFI Mathematics Working Group Seminar on Mathematics in Engineering Education. The Department of Physics and Mathematics, Coimbra Polytechnic-ISEC, Coimbra, Portugal, 26–29 June 2018; pp. 158–163. [Google Scholar]
  54. Tomé Fernández, M. Attitudes toward Inclusive Education and Practical Consequences in Final Year Students of Education Degrees. Procedia Soc. Behav. Sci. 2017, 237, 1184–1188. [Google Scholar] [CrossRef]
  55. Sillat, L.H.; Tammets, K.; Laanpere, M. Digital Competence Assessment Methods in Higher Education: A Systematic Literature Review. Educ. Sci. 2021, 11, 402. [Google Scholar] [CrossRef]
  56. R Software Version 4.1.1 (Kick Things) Released on 2021-08-10. The R Project for Statistical Computing. Available online: https://www.r-project.org/ (accessed on 21 August 2021).
  57. Excel Software, Included in Office Professional Plus 2019 Package. Microsoft. Available online: https://www.microsoft.com/es-es/microsoft-365/get-started-with-office-2019 (accessed on 21 August 2021).
  58. Maycock, K.W.; Lambert, J.; Bane, D. Flipping learning not just content: A 4-year action research study investigating the appropriate level of flipped learning. J. Comput. Assist. Learn. 2018, 34, 661–672. [Google Scholar] [CrossRef]
  59. García-Peñalvo, F.G.; Fidalgo-Blanco, A.; Sein-Echaluce, M.L.; Conde, M.A. Cooperative micro flip teaching. In Proceedings of the International Conference on Learning and Collaboration Technologies (LCT), Toronto, ON, Canada, 17–22 July 2016; LNCS. Zaphiris, P., Ioannou, A., Eds.; Springer: Cham, Switzerland, 2016; Volume 9753, pp. 14–24. [Google Scholar]
  60. Sánchez Ruiz, L.M.; Llobregat Gómez, N.; Moll López, S.E.; Moraño Fernández, J.A.; Roselló Ferragud, M.D. Collaborative learning in Mathematics for Aerospace Engineering. In Proceedings of the 45th SEFI Annual Conference: Education Excellence for Sustainability, Azores, Portugal, 18–21 September 2017; European Society for Engineering Education SEFI: Azores, Portugal, 2017; pp. 1526–1533. [Google Scholar]
  61. Prensky, M.R. Teaching Digital Natives: Partnering for Real Learning; Corwin Press: Thousand Oaks, CA, USA, 2010. [Google Scholar]
  62. Llobregat-Gómez, N.; Sánchez-Ruiz, L.M. Digital citizen in a resilience society. In Proceedings of the 2015 International Conference on Interactive Collaborative Learning (ICL), Florence, Italy, 20–24 September 2015; pp. 1026–1030. [Google Scholar]
  63. Llobregat-Gómez, N.; Sánchez-Ruiz, L.M. El Emergente Ciudadano Digital. In TICAI 2015: TICs Para el Aprendizaje de la Ingeniería; Gericota, M.G., Santos Gago, J.M., Eds.; IEEE, Sociedad de Educación: Capítulos Español y Portugués; Universidade de Vigo: Vigo, Spain, 2016; pp. 9–14. [Google Scholar]
  64. Sánchez-Ruiz, L.M.; Llobregat-Gómez, N. Assessment of Zgen students’ competencies by digital immigrants. In Proceedings of the 10th International Symposium on Innovation and Technology (ISIT), Cusco, Perú, 22–24 July 2019; pp. 41–44. [Google Scholar]
  65. Alpers, B. Mathematics as a Service Subject at the Tertiary Level A State-of-the-Art Report for the Mathematics Interest Group; European Society for Engineering Education (SEFI): Brussels, Belgium, 2020. [Google Scholar]
  66. Mathematics Working Group SEFI. A Framework for Mathematics Curricula in Engineering Education, 3rd ed.; Alpers, B.A., Demlova, M., Fant, C.H., Gustafsson, T., Lawson, D., Mustoe, L., Velichova, D., Eds.; European Society for Engineering Education (SEFI): Brussels, Belgium; Available online: http://sefi.htw-aalen.de/Curriculum (accessed on 21 July 2021).
  67. Kálmán, A. The meaning and importance of Thomas Kuhn’s concept of ‘paradigm shift’. How does it apply in education? Opus Educ. 2016, 3, 96–107. [Google Scholar]
Figure 1. DCD assessment chart with chains of topics distributed in LP and TP, with its corresponding weights, leading to a final grade (FG) of an annual subject run from September to June.
Figure 2. Student performance under DCD assessment during 2010–2021.
Figure 3. Distribution of students based on the final grade obtained with DCDA.
Figure 4. Students’ perception of the fairness of their awarded final grade.
Figure 5. Students’ perception of the fairness of their grades within grading slots.
Figure 6. Preference on the type of assessment methodology to be applied.
Figure 7. Students’ opinion of TP4 and its role to show mastery in TP1 and TP3.
Figure 8. Perception of the effect of demonstrating previous competencies in TP4 on the final grade.
Figure 9. Students’ point of view on the usefulness of blended assessment.
Figure 10. Students’ opinion of their mathematical competencies improvement.
Figure 11. Students’ perception of their mathematical competencies improvement by grading slots.
Figure 12. General opinion of the assessment system applied.

