Article

Online Testing as a Means of Enhancing Students’ Academic Motivation during the Coronavirus Pandemic

by Stanislava Stoyanova 1 and Vaitsa Giannouli 2,*
1 Department of Psychology, South-West University “Neofit Rilski”, 2700 Blagoevgrad, Bulgaria
2 School of Humanities, Social and Educational Sciences, Department of Social & Behavioral Sciences, European University Cyprus, Nicosia 2404, Cyprus
* Author to whom correspondence should be addressed.
Educ. Sci. 2023, 13(1), 25; https://doi.org/10.3390/educsci13010025
Submission received: 18 November 2022 / Revised: 17 December 2022 / Accepted: 21 December 2022 / Published: 26 December 2022

Abstract: Although it is widely believed that online testing may be applied as a way of enhancing academic motivation, thus far we know little about this topic for Bulgarian students. For this purpose, we conducted research during the COVID-19 pandemic focusing only on university students (n = 80; 74 women; 63 full-time and 17 part-time; 41 bachelor's and 39 master's students). Participants studied online and filled in several tests online as a part of their interim control. Nine tests were created measuring knowledge and skills related to psychological measurements. Each student was provided the opportunity to respond an unlimited number of times to each test; therefore, data were collected from 1226 testing procedures, which permitted the comparison of 911 responses from full-time students with 315 responses from part-time students. The findings support the conclusion that the highest academic motivation was manifested in the best performance: the students with high academic motivation achieved the highest test scores in online testing. The lowest academic motivation was expressed in the least effort put into the learning process: the students with the lowest motivation made the fewest attempts to respond to a test, compared with the students with medium and high academic motivation.

1. Introduction

Finding ways to enhance students' academic motivation is important for a society striving to facilitate the transfer of social experience, accumulated knowledge, and scientific achievements from one generation to the next. Teachers apply different approaches and techniques in the learning process, such as gamification, multisensory presentation of information, role-playing, and project work, to stimulate learning motivation and improve students' performance. The reorganization of educational processes into electronic environments during the COVID-19 pandemic created the need to search for ways to stimulate learning motivation and academic performance through the mediation of digital technologies. One study revealed that university students demonstrated a higher motivation to use the online learning system, a higher need for achievement in online learning, and higher relatedness (i.e., a higher need to collaborate and communicate with their lecturers or their fellow students) than their lecturers [1]. A study among professional psychologists from 29 countries revealed a high appreciation of tests in general, and Bulgaria was among the countries where professional psychologists highly valued Internet testing [2]. On the one hand, teachers, parents, and students consider that assessment and test results influence learning motivation [3], and students' scores on online quizzes correlate positively with their motivation, self-efficacy, and active learning strategies [4]. On the other hand, test-taking motivation is related to test performance and achievement test results [5], which may be partly due to the effort expended, the interest manifested, the importance granted to the usefulness of the course content, and the expectancy that it will be necessary for further occupational performance. That is why the present study focused on online testing as a means to stimulate learning motivation and better academic performance. There are several reasons to state that online testing could be one of the means for enhancing students' academic motivation.
One of the most effective ways for students to stay motivated is to set effective academic goals for themselves and to complete tasks [6]. Successfully passing an exam and achieving passing grades in online learning are important tasks for students with all levels of academic motivation: low, medium, and high [4]. Most test takers highly rate the subjective value of performing well in the assessment [7]. Responding to tests related to the course content is one of the students' tasks in the learning process, connected to the goals that the students set for themselves: to perform successfully, to pass the exam, and to graduate. Goals can be used to maintain individual motivation, and the most effective academic goals are specific, proximal, moderately challenging or moderately difficult, meaningful, and achievable [6]. Planning how to reach a goal and following the steps of this plan also keeps students motivated [6]. Online testing is flexible and permits good planning [8].
The student may decide what mark they need to achieve at the next assessment in the course, and this specific proximal goal may be motivating for learning during the days preceding the evaluation [6]. Online testing measures a learner's progress, and continuous testing during the academic year stimulates the student to set several individual specific goals, both to improve their own knowledge and skills and to pass each test in a sequence of tests.
When one achieves a goal that is too easy or not challenging, one does not feel satisfied [6]. Students' satisfaction with the educational process may stimulate their activity and learning motivation. An extremely difficult and challenging goal is most probably unrealistic and may make the student feel they have failed [6]. High achievers are more persistent in taking tests than low achievers, who may become demotivated by constant evidence of their low achievement [9]. A moderately challenging and achievable goal, when realized, provides the student with a sense of accomplishment and success [6]. That is why using tests of medium difficulty seems to be the most appropriate way to stimulate students' learning motivation. If students succeed at one point in their learning, this may encourage them to continue expending effort to learn and to achieve further success [6]. A test of medium difficulty also has other advantages. When a norm-referenced interpretation is used and the majority of the test items are moderately difficult, the test scores often vary greatly (higher standard deviation), and the test differentiates well among the students [6], as the simulation sketch below illustrates. That is why, in the present study, the tests used were mainly of medium difficulty (see Table 1).
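To make the variance argument concrete: for a dichotomous item answered correctly with probability p, the item score variance is p(1 - p), which peaks at p = 0.5. The following simulation is a minimal illustrative sketch, not the study's data or procedure; the class size, test length, and difficulty values are assumptions chosen only to show that total scores spread most when items are of medium difficulty.

```python
# Illustrative sketch: total-score spread as a function of item difficulty.
# A dichotomous item passed with probability p has variance p * (1 - p),
# which is largest at p = 0.5, so medium-difficulty items spread scores most.
import numpy as np

rng = np.random.default_rng(42)
n_students, n_items = 80, 20  # hypothetical class and test sizes

for p in (0.1, 0.5, 0.9):  # hard, medium, and easy items
    responses = rng.random((n_students, n_items)) < p  # 0/1 answer matrix
    totals = responses.sum(axis=1)                     # total score per student
    print(f"item difficulty p={p}: SD of total scores = {totals.std(ddof=1):.2f}")
# The standard deviation is largest at p = 0.5, i.e., a norm-referenced
# test differentiates best when most items are moderately difficult.
```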
In traditional face-to-face learning in a mixed-ability group of fast and slow learners, the fast learners get bored easily, while the slow learners may become demotivated and demonstrate low performance [10]; in e-learning and online testing, when everyone can learn the material and be assessed at their own pace, these difficulties should be easily overcome. Both slow and fast learners have negative attitudes towards separate classes for slow and fast learners, considering that this approach may affect the unity of students; in addition, the slow learners disagree that their classroom learning, or their use of active learning methods, may be improved in this manner [11]. E-learning allows students to learn the stored content material at their own pace, and online testing that provides students ample time for multiple attempts to respond to a test should facilitate acquisition of the material without the self-perception of lagging behind the general pace of work.
Different types of tests are used in the educational process, and all of them can be applied online. The most commonly used tests in education are diagnostic tests and achievement tests [12]. Students sometimes experience difficulties in the process of learning. Diagnostic tests are mainly of low or medium difficulty, and they are used in the educational process to detect students' learning difficulties by identifying their most common types of errors [12]. Achievement tests may be teacher-created (without established item properties, reliability, validity, and norms) or standardized; both are used to determine students' achievements, progress in the process of learning, strengths and weaknesses, and the effectiveness of teaching, and they may motivate the students towards further learning [12].
Formative assessment during the process of learning is useful for providing feedback about the student’s progress [6]. Feedback focused on the task, or task-involving feedback, is associated with greater interest and effort than feedback that is ego-involving [9]. Because positive feedback has a positive effect on motivation to learn and develop [13], the feedback that was provided to the students in the present study was phrased positively. Students were encouraged to learn and relearn the course content, and to reformulate their responses to exam questions until they were satisfied with their performance and grade. If students can compare their current and past achievements, and realize that they have made some progress, this may help them stay motivated for learning [6]. It has been established that providing feedback about test results, plus providing the opportunity to correct one’s own answers, diminishes test anxiety [14]; however, providing constant feedback about the test results during the process of test administration, without any possibility for correction of one’s own answers, increases test anxiety [15]. That is why, if the students have multiple attempts to respond to a test and are provided feedback after each attempt (at least about their score), this procedure should diminish their test anxiety and stimulate their motivation to learn through testing. The same approach was applied in the present study.
Dissatisfaction with e-learning has been attributed mainly to the lack of adequate interaction between the lecturer and the learners, on the one hand, and among the learners themselves, on the other [16]. The feedback provided by the lecturer, in the case of assessment by the teacher, and the feedback provided by fellow students, in the case of anonymous peer assessment, is a type of communication during e-learning that, if objective and benevolent, may provide motivation for further improvement.
Keeping test grades confidential may prevent demotivating the students who do not perform well [6] by avoiding shame, i.e., additional negative emotions related to learning. Keeping test grades confidential may also prevent demotivating the students who perform well, because they will not be envied by their fellow students. That is why, in the present study, each student was informed only about their own test result, not about the other students’ achievements.
Having plenty of time to respond to a test may diminish the students' anxiety level [6], and avoiding a time limit as a source of frustration may stimulate the students' motivation to perform well. That is why testing that allows a reasonable time for answering, or that does not cut off the answering process after a set duration has elapsed, seems more appropriate for stimulating learning motivation than testing within time frames that are too short. This approach was applied in the present study.
Summative assessment at the end of the course aims to determine the students’ levels of competency with the course content [6]. A good test performance should reflect the level of learning the course content, but good test performance should not be more important than learning course content that could be useful for future professional realization [6]. The items of the tests in the present study were formulated with a focus on the practical implication of the acquired knowledge and skills.
Face-to-face testing permits the observation of students for signs, such as concentration, engagement, and involvement in the activities, that indicate that the students are motivated [6]. In addition, a moderate positive correlation has been established between class attendance in courses and final grades in these courses [6], which may be due to careful listening to the lecturer's explanations; to more frequent repetition of the course content, perceived through different modalities (auditory and visual); or to the students' motivation, if the best motivated students attend more classes and prepare better for their exams. E-learning permits repetition of the stored course content several times, and different modalities can be used for presentation of course content, which should facilitate the learning process.
Testing enhances long-term retention of studied knowledge, which is referred to as the testing effect or test-enhanced learning [17]. Testing also facilitates the application of knowledge to solving problems, and it promotes memorization of untested knowledge [17]. Assessment may serve as an opportunity for student learning, because when students respond to assessment tasks, they mentally represent the content, make connections between different concepts, and transfer their knowledge to new situations [18]. The testing effect persists across different administration modes (paper-and-pen or online), different test formats, different educational levels (elementary school, high school, university, college), for both male and female students, and across many academic subject categories [17]. The testing effect is stronger when corrective feedback is offered, when the number of test repetitions is higher, and when the educational period (treatment duration) is longer [17].
Continuous assessment aims to improve teaching and learning, and to engage, encourage, and motivate the students and make the assessment a positive experience [19]. Assessment should motivate students to show what they can do and to learn further so that they can do more [19].
Students’ engagement is linked to their satisfaction and the quality of their experience [20]. Most test takers highly rate test utility [7]. Moreover, it has been found that interactive tests are among the most preferred formats of electronic learning content [21]. Because it is appealing for students, online testing may be applied as a way of enhancing academic motivation.
A study among university students found that their attitudes towards computer-based testing were positive and, based on these findings, computer-based testing was recommended as the proper test mode for intrinsically motivated students with autonomous regulation [22]. Students who are confident of success enjoy tests [9]; for this reason, allowing multiple responses to the same test, until the student is satisfied with their own performance, seems important for stimulating academic motivation. Frequent participation in testing may indicate increased learning motivation, due to acquired experience in working with tests, confidence in the strategy of performing tasks, confidence in time allocation, and confidence in the results of assessment [3]. Test-taking motivation is associated positively with test performance in both low-stakes and high-stakes testing contexts [23].
The results from computer testing in e-learning may be used to improve existing educational activities, to add new educational activities, or to compensate for omissions in the students' knowledge and skills [24]. Online assessment, and especially online quizzes and multiple-choice questions, promotes self-directed learning [25]. Online testing stimulates the process of self-education because of the feedback provided [26]. Testing can enhance long-term learning and can encourage further learning, partly because of repeated retrieval and feedback [27]. Feedback improves performance when it indicates the gap between current and desired performance on a task, encourages motivation to complete this particular task, and does not focus on personality; continuous feedback may maintain motivation [28]. That is why frequent online testing with continuous feedback during the educational process may enhance learning motivation.
More than half of the students in one study considered that e-assessment activities, such as assignments, projects, and exams, captured their interest, enhanced their learning experience, stimulated them to acquire new knowledge and skills, and encouraged them to be more self-confident and to believe in their own success; moreover, students' motivation correlated strongly with such aspects of e-assessment as e-grade checking and feedback [29]. The majority of university students in an obligatory statistics course valued frequent graded online assessment with immediately received feedback about their results highly and positively, and considered it a study motivator [30].
Students who received higher grades in frequent assessment valued frequent online assessment highly, experienced lower stress and more confidence during frequent assessment, and had higher intrinsic motivation [30]. Students achieve higher test performance when they invest more effort and experience less worry during testing [31]. Students with lower grades valued frequent assessment less and had lower intrinsic motivation only when they perceived assessment as stressful and as diminishing their self-confidence [30]. Communicating to students the purpose and benefits of frequent assessment as a learning tool could mitigate their lower self-confidence and diminish the stress from assessment and low grades [30]. If teachers explain to the students that they may learn from assessment by using feedback on their own learning progress, and that the grades reflect the students' effort and current understanding rather than their personal ability, this may reduce the students' negative emotions during assessment and increase their effort in studying [30]. That is why, in the current study, the students received an explanation that the test results were informative about their advancement in learning, that they could take each test as many times as they liked, because this was also a form of learning the study material, and that they would receive the highest score they achieved for each test. This instruction aimed to reduce the students' possible negative experiences of frequent test examination, to stimulate the students to reflect on the feedback provided regarding test results as a learning tool, and to encourage them to perceive frequent assessment as a valuable learning opportunity.
Students achieve more when frequent testing is implemented. This may be due to effective feedback and teaching adjusted to the students' difficulties, because the frequent tests may act as extrinsic motivators for students, or due to an improvement in students' retention (the testing effect, or test-enhanced learning) resulting from repeated exposure to the material through study and testing [32]. However, increasing the test frequency may increase both stress and test anxiety, which may decrease students' achievement, whereas increasing the test frequency further may familiarize the students with taking tests and thereby decrease their anxiety [32].
The students who took at least one test during a 15-week term had higher exam grades than the students who did not take any tests, and better students’ performance was associated with more frequent testing, but the amount of improvement in students’ achievement was not equal after each further testing [33]. The students evaluated weekly had statistically significantly higher test scores than the students tested biweekly, and low- and middle-achieving students had higher gains when tested weekly [34]. On their final exam, high school students who received a 10-min quiz daily for 6 weeks significantly outperformed students who received a quiz weekly for 6 weeks [34]. Continuous weekly e-assessment introduced in a virtual learning environment module led to a greater increase in students’ learning activity than in that module the previous year, as well as compared to the same students’ learning activity in two other e-learning modules [20].
Online testing facilitates better student achievement than traditional testing [35]. Online testing has been recommended for older students, upper elementary and above [36], such as the university students in the current study. Students who learned online received higher exam test scores than students who learned traditionally face-to-face, and most instructors evaluated online learning as boosting students' learning motivation, engagement, and interest [37]. Adaptive online quizzes increase students' motivation and engagement [38]. Testing stimulates learners to commit more effort to the learning process, boosting learning motivation, and it moderately raises students' academic achievement [17]. Thus, given the growing importance of e-learning and e-testing in educational settings [39], and due to the remodeling of education with online solutions (which were compulsory, especially during the COVID-19 pandemic), the aim of this study was to examine university students' motivation in a population neglected in the literature, that is, Bulgarian university students, based on their performance in online testing assessment attempts.

2. Materials and Methods

2.1. Procedure

The study was conducted from 30 September 2021 to 22 July 2022, during the COVID-19 pandemic. The process of teaching and learning was held only online from 14 October 2021 to 14 March 2022; during the rest of the research period, the process of university study was hybrid. However, during the whole period of study, each participant responded only online to nine tests measuring knowledge and skills related to psychological measurements. The participating students were provided the opportunity to respond to each test an unlimited number of times, until satisfied with the test score. The deadline for responding to the online tests was the day before the final exam; students who had not responded to the online tests by that date were required to respond to all the remaining online tests on the date of their final exam, together with their final exam test (also administered online during the exclusive e-learning period, but face to face during the hybrid learning period).
The participating students responded to the 9 online tests a total of 1226 times; these tests evaluated knowledge and skills related to psychological measurements. Almost half of these responses (N = 612) were provided more than one month before the final exam; approximately the same number (N = 564) were provided less than one month before the exam; and the fewest (N = 50) were provided during the final exam.

2.2. Participants

Purposeful sampling was used: only university students from Bulgaria who studied online and filled in several tests online as a part of their interim control participated. The choice of country was based on the scarcity of published studies in the literature [24] regarding this topic in Bulgaria. In total, 80 university students participated (74 women and 6 men). They studied full-time (N = 63) or part-time (N = 17), in bachelor's degree programs (N = 41) or master's degree programs (N = 39). Because each student was provided the opportunity to respond an unlimited number of times to each test, data were collected from 1226 testing procedures, which permitted the comparison of 911 responses from full-time students with 315 responses from part-time students; these included 662 responses from bachelor's students, 564 responses from master's students, 69 responses from male students, and 1157 responses from female students. All participating students were enrolled in university degree programs in psychology, and they had successfully completed the same university courses for their educational degree before starting the course on Psychological Measurements, which ensured that all participants possessed a minimum baseline of knowledge and skills.

2.3. Instruments

Nine tests were created measuring knowledge and skills related to psychological measurements; the majority of test items corresponded to the knowledge, comprehension, and application levels of the taxonomy of cognitive educational objectives by Bloom et al. (1956) [40]. Some of the psychometric properties of these tests are described in Table 1.
These tests were easy rather than difficult, because their test difficulty, defined as the mean test score expressed as a percentage of the maximum possible test score [41], was more than 50% (see Table 1). The achieved test scores varied from 0 to the maximum possible score for each test (see Table 1). The number of attempts at responding to each test varied from 1 to 23 (mean = 2.37, mode = 1, median = 2, SD = 2.4). For eight of the nine tests, the mean test score across all attempts was higher than the mean test score on the first attempt (see Table 1). The only exception was the test measuring knowledge and skills related to item analysis, whose mean score on the first attempt was higher than the average score across all attempts; this test was neither the easiest nor the most difficult, but the most attempts were made to solve it (see Table 1), which may be because it contained more short-answer items requiring the participant to formulate an individual answer. Test–retest reliability was sufficient for all tests, because the reliability coefficients based on correlation coefficients were above 0.3. Content validity of the tests was established based on the opinions of two experts.
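The two psychometric indices reported above are straightforward to compute. The sketch below uses hypothetical score arrays (the study's raw data are not public, so every value here is an assumption for illustration); it shows the facility-based difficulty index from [41] and a correlation-based test–retest check against the 0.3 criterion.

```python
# Minimal sketch of the reported psychometric checks, on hypothetical data.
import numpy as np
from scipy.stats import pearsonr

max_score = 30                                    # assumed maximum test score
scores = np.array([22, 18, 25, 27, 16, 21, 24])   # hypothetical test scores

# Difficulty (facility) index [41]: mean score as a percentage of the
# maximum possible score; values above 50% mean the test is easy.
difficulty = scores.mean() / max_score * 100
print(f"difficulty index: {difficulty:.1f}%")

# Test-retest reliability: correlation between the same students' first
# and repeated attempts (hypothetical paired arrays).
first_attempt = np.array([22, 18, 25, 27, 16, 21, 24])
second_attempt = np.array([24, 20, 26, 27, 19, 23, 25])
r, p_value = pearsonr(first_attempt, second_attempt)
print(f"test-retest r = {r:.2f} (criterion used in the study: r > 0.3)")
```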
Two criteria for the level of academic motivation were chosen: a modified version of the questionnaire measuring the level of academic motivation created by Radoslavova and Velichkov (2005) [42], and a behavioral measure of academic motivation. The students who both failed their final exam on the first date and had low/weak academic motivation according to the modified questionnaire [42] were considered low motivated (9 students who provided a total of 46 answers to these 9 online tests; i.e., these students did not respond even once to each of the 9 tests measuring knowledge and skills related to psychological measurements, an average of 0.6 answers per test from the students in this group). The students who worked successfully on several additional tasks during the academic year, were exempt from the final exam, and had high/strong academic motivation according to the modified questionnaire [42] were considered highly motivated (10 students who provided a total of 189 answers to these 9 online tests; i.e., an average of 2.1 answers per test from the students in this group). The students who both successfully passed their exam and had a medium/moderate level of academic motivation according to the modified questionnaire [42] were considered moderately motivated (61 students who provided a total of 991 answers to these 9 online tests; i.e., an average of 1.8 answers per test from the students in this group). One item from the questionnaire (Radoslavova and Velichkov, 2005, p. 47) [42], “I regularly attend all the lectures because I am interested”, was modified into “I regularly participate in all the lectures because I am interested”, because the teaching and learning processes were implemented online during the COVID-19 pandemic; items were answered on a 4-point scale from 0 (“disagree”) to 3 (“agree”). Cronbach’s alpha of this questionnaire was 0.8, and its construct validity was established by means of positive and significant correlations with three long-term goals: growing as a competent specialist, contributing to society as a specialist, and building one’s own professional image (Radoslavova and Velichkov, 2005) [42].
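Cronbach's alpha, reported above as 0.8, can be computed from a respondent-by-item score matrix using the standard formula alpha = k/(k - 1) * (1 - sum of item variances / variance of total scores). The sketch below assumes a hypothetical response matrix on the questionnaire's 0-3 scale; it illustrates the computation, not the study's actual data.

```python
# Sketch: Cronbach's alpha for a Likert-type questionnaire scored 0-3.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, n_items) matrix of item scores."""
    k = items.shape[1]
    sum_item_vars = items.var(axis=0, ddof=1).sum()  # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of total scores
    return k / (k - 1) * (1 - sum_item_vars / total_var)

# hypothetical responses of six students to four items on the 0-3 scale
responses = np.array([
    [3, 2, 3, 3],
    [1, 1, 0, 1],
    [2, 2, 3, 2],
    [0, 1, 1, 0],
    [3, 3, 2, 3],
    [2, 1, 2, 2],
])
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```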

3. Results

Data processing was performed with SPSS 20, applying descriptive statistics, ANOVA, the Kruskal–Wallis non-parametric method, bootstrapping based on 5000 bootstrap samples, the Spearman rho correlation coefficient, the independent samples t-test, the Mann–Whitney U non-parametric method, and chi-square analysis. Effect sizes were also calculated.
Test scores were approximately normally distributed (skewness was 0.089 and kurtosis was −0.147). However, the number of attempts made for responding to a test was not normally distributed (skewness was 3.871 and kurtosis was 20.693). Some of the means and standard deviations of the test scores and number of attempts responding to a test made by the students with different learning motivation are described in Table 2.
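The study used SPSS 20; an equivalent open-source pipeline can be sketched in Python with SciPy. The data frame, column names, and grouping variable below are assumptions for illustration (one row per testing procedure, as in the 1226-record data set); the Welch statistic also reported here has no direct SciPy function and would require a package such as pingouin.

```python
# Illustrative analogue of the reported SPSS analyses (assumed columns:
# 'score', 'attempts', and 'motivation' in {'low', 'medium', 'high'}).
import pandas as pd
from scipy import stats

def analyze(df: pd.DataFrame, dv: str) -> None:
    # Distribution shape, checked before choosing parametric tests.
    print(f"{dv}: skewness = {stats.skew(df[dv]):.3f}, "
          f"kurtosis = {stats.kurtosis(df[dv]):.3f}")
    groups = [g[dv].to_numpy() for _, g in df.groupby("motivation")]
    print("Levene:", stats.levene(*groups))           # homogeneity of variances
    print("ANOVA:", stats.f_oneway(*groups))          # classic one-way F test
    print("Kruskal-Wallis:", stats.kruskal(*groups))  # non-parametric analogue

# df = pd.read_csv("testing_procedures.csv")   # hypothetical file name
# analyze(df, "score")
# analyze(df, "attempts")
# stats.spearmanr(df["score"], df["attempts"]) # rho between scores and attempts
```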
There were statistically significant differences between the students with low, medium, and high academic motivation in the test scores achieved (Levene test for homogeneity of variances (2, 1223) = 3.735, p = 0.024; Welch (2, 106.769) = 3.442, p = 0.036; F (2, 1223) = 4.000, p = 0.019) and in the number of attempts at responding to a test (Levene test for homogeneity of variances (2, 1223) = 9.923, p < 0.001; Welch (2, 329.962) = 99.874, p < 0.001; F (2, 1223) = 6.270, p = 0.002; Kruskal–Wallis test = 28.275, df = 2, p < 0.001). These findings correspond to the results of other studies showing that students' test scores are related to their motivation [4], that test-taking motivation is related to test performance and achievement test results [43], and that low achievers are less persistent in taking tests, partly because they become demotivated [9].
The students with high academic motivation had the highest test scores, i.e., they achieved the best results from online testing (see Table 2; p LSD = 0.011 for the difference between low-motivation students and high-motivation students, with a bootstrapped confidence interval from −6.345 to −0.272; p LSD = 0.031 for the difference between moderately motivated students and high-motivation students, with a bootstrapped confidence interval from −2.611 to −0.139). The students with low and medium academic motivation did not differ statistically significantly in their test scores (see Table 2; p LSD = 0.103, with a bootstrapped confidence interval from −4.679 to 0.913), despite the trend of the students with medium academic motivation receiving higher test scores than the students with low academic motivation. The low-motivation students made the fewest attempts to respond to a test, compared with the students with medium academic motivation (see Table 2; p LSD = 0.001; bootstrapped confidence interval from −1.438 to −1.064) and the students with high academic motivation (see Table 2; p LSD = 0.001; bootstrapped confidence interval from −1.763 to −1.016). There were no statistically significant differences between the students with medium and high academic motivation in their number of attempts to respond to a test (see Table 2; p LSD = 0.505; bootstrapped confidence interval from −0.539 to 0.244), but there was a trend for highly motivated students to make more attempts to respond to a test than less motivated students.
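The bootstrapped confidence intervals quoted above can be approximated with a percentile bootstrap over 5000 resamples, matching the data-processing description. The following sketch uses hypothetical group score arrays; an interval that excludes zero indicates a significant mean difference between two motivation groups.

```python
# Percentile bootstrap CI for a difference of group means (5000 resamples).
import numpy as np

def bootstrap_mean_diff_ci(a, b, n_resamples=5000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    diffs = np.empty(n_resamples)
    for i in range(n_resamples):
        # resample each group with replacement and record the mean difference
        diffs[i] = (rng.choice(a, size=a.size, replace=True).mean()
                    - rng.choice(b, size=b.size, replace=True).mean())
    return tuple(np.percentile(diffs, [100 * alpha / 2, 100 * (1 - alpha / 2)]))

# hypothetical test scores of low- vs. high-motivation students;
# a negative interval excluding 0 mirrors the pairwise results above
low = np.array([14.0, 12.5, 16.0, 11.0, 15.5, 13.0])
high = np.array([18.0, 20.5, 17.5, 21.0, 19.0, 20.0])
print(bootstrap_mean_diff_ci(low, high))
```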
The test scores did not correlate statistically significantly with the number of attempts made at responding to the test (r (1224) = 0.050, p = 0.082). The average test scores and numbers of attempts at responding to a test, differentiated by gender, are described in Table 3.
The male students in this study achieved higher test scores than the female students (F Levene = 4.831, p = 0.028; t (78.624) = 4.141, p < 0.001; Mann–Whitney U = 28,667.000, p < 0.001; see Table 3). The male students were less prone to use the opportunity to make several attempts to respond to each test, striving for success with a higher score, but the gender differences in the number of attempts at solving a test were not statistically significant (t (1224) = 0.954, p = 0.340; Mann–Whitney U = 39,416.500, p = 0.852; see Table 3). These results are in accordance with the findings that male students outperform female students in math and science tests, but that during longer testing male students are more easily bored, while female students are more able to sustain their performance during test-taking [43]. The higher test scores among male students could also be explained by the findings that females are more likely than males to experience and report higher levels of test anxiety [32]. The frequency distribution of levels of academic motivation according to gender is described in Table 4.
The male students in this study tended to have high academic motivation more frequently than expected, and the female students tended to have moderate/medium academic motivation more frequently than expected (see Table 4; χ² (2, N = 1226) = 39.735, p < 0.001; Phi = 0.180, i.e., a small effect size). These results correspond to studies finding that women reported less self-determined motivation than men in lecture-based courses [44], and that men had higher motivation, both extrinsic and intrinsic, than women [45]. The average test scores and numbers of attempts at responding to a test, differentiated by form of study, are described in Table 5.
The full-time students in this study achieved higher test scores (t (1224) = 2.648, p = 0.008; Mann–Whitney U = 128,072.000, p = 0.004; see Table 5) than the part-time students, supporting the findings of other studies that full-time students had higher qualification completion rates [46] and higher average grades [47] than part-time students. The full-time students were more likely to make more attempts at responding to a test than the part-time students, but the difference between them was not statistically significant (Mann–Whitney U = 141,983.000, p = 0.767; see Table 5). The average test scores and numbers of attempts at responding to a test, differentiated by educational degree, are described in Table 6.
The students pursuing a master's degree achieved higher test scores than those pursuing a bachelor's degree (F Levene = 15.542, p < 0.001; t (1223.966) = 3.521, p < 0.001; Mann–Whitney U = 164,061.500, p < 0.001; see Table 6); as expected, a higher educational degree corresponded to more acquired knowledge and skills reflected in test performance. The students pursuing a bachelor's degree were more likely to make more attempts at responding to a test than the students pursuing a master's degree, but the difference between them was not statistically significant (Mann–Whitney U = 177,821.500, p = 0.125; see Table 6). The average test scores and numbers of attempts at responding to a test, differentiated by time of responding, are described in Table 7.
There were statistically significant differences in the test scores achieved (F (2, 1223) = 11.888, p < 0.001) and in the number of attempts at responding to a test (Levene test for homogeneity of variances (2, 1223) = 29.904, p < 0.001; Welch (2, 220.207) = 39.935, p < 0.001; F (2, 1223) = 15.883, p < 0.001; Kruskal–Wallis test = 24.689, df = 2, p < 0.001; see Table 7) between the students who responded to one or several online tests more than one month before the final exam, less than one month before the final exam, and during the final exam. The students who responded to the tests more than one month before the final exam had significantly higher test scores than the students who responded less than one month before the final exam (see Table 7; p LSD = 0.022; bootstrapped confidence interval from 0.156 to 1.979) or during the final exam (see Table 7; p LSD < 0.001; bootstrapped confidence interval from 3.603 to 7.271). The students who responded to the tests less than one month before the final exam had significantly higher test scores than the students who responded during the final exam (see Table 7; p LSD < 0.001; bootstrapped confidence interval from 2.560 to 6.167). The students who responded to the tests more than one month before the final exam made more attempts at responding than the students who responded less than one month before the final exam (see Table 7; p LSD < 0.001; bootstrapped confidence interval from 0.398 to 0.948) or during the final exam (see Table 7; p LSD < 0.001; bootstrapped confidence interval from 1.045 to 1.658). The students who responded to the tests less than one month before the final exam made more attempts than the students who responded during the final exam (see Table 7; p LSD = 0.05; bootstrapped confidence interval from 0.453 to 0.905). The frequency distribution of levels of academic motivation according to the time of test completion is described in Table 8.
The students who completed the tests more than one month before the final exam had high academic motivation significantly more often than expected; the students who completed the tests less than one month before the final exam had moderate/medium academic motivation significantly more often than expected; and the students who completed the tests during their exam had low academic motivation significantly more often than expected (see Table 8; χ² (4, N = 1226) = 33.416, p < 0.001; Phi = 0.165, i.e., a small effect size).
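Both chi-square analyses in this section (Tables 4 and 8) report Phi computed as the square root of χ²/N. The sketch below reproduces the form of the gender-by-motivation analysis; the marginal totals match the reported response counts (69 male and 1157 female responses; 46 low-, 991 medium-, and 189 high-motivation responses), but the individual cell counts are hypothetical, since only the test statistics are published.

```python
# Sketch: chi-square test of motivation level by gender, Phi = sqrt(chi2 / N).
import numpy as np
from scipy.stats import chi2_contingency

# rows: male, female; columns: low, medium, high motivation
# (cells hypothetical; marginals consistent with the reported counts)
observed = np.array([
    [5, 30, 34],
    [41, 961, 155],
])
chi2, p, dof, expected = chi2_contingency(observed)
phi = np.sqrt(chi2 / observed.sum())  # effect-size convention used in the paper
print(f"chi2({dof}) = {chi2:.3f}, p = {p:.4f}, Phi = {phi:.3f}")
# Positive entries show combinations occurring "more frequently than expected".
print(np.round(observed - expected, 1))
```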

4. Discussion

It may be concluded that the highest academic motivation was manifested in the best performance, because the students with high academic motivation had the highest test scores, i.e., achieved the best results from online testing. It may also be concluded that the lowest academic motivation was expressed in the least effort expended in the learning process, because the low-motivation students made the fewest attempts to respond to a test, compared with the students with medium and high academic motivation. These results correspond with the findings that students who received higher grades in frequent assessments had higher intrinsic motivation [30], that students achieve higher test performance when they invest more effort [31], that better performance is associated with more frequent testing [33], and that testing stimulates learners to commit more effort to the learning process, boosting learning motivation, which raises students' academic achievement [17].
Online testing in this study attempted to stimulate learning motivation by allowing students an unlimited number of attempts to respond to each test, but there were still a few students with low academic motivation who did not expend enough effort to master the course content. If students believe that they are capable of achieving success and if they value the activity, they will be highly motivated to learn [36]. If students value the outcome but believe that no matter how hard they try they probably will not succeed, or if the activity holds no importance for them, their motivation will be weak [36]. In the educational process, there are always students who fail their exams; therefore, this fact cannot be attributed only to e-learning and online testing. The smallest group of students in this study were those with low motivation, which may be regarded as evidence of the potential of online testing to stimulate learning motivation. Students achieve more when frequent testing is implemented [32]. That is why the current study used continuous frequent testing that familiarized the students with the online testing procedure: the students were given the possibility of multiple attempts to respond to each test until they were satisfied with their results; their highest test score was used for their grade (which should minimize test anxiety); and they received feedback after each testing, which permitted them to learn from their testing experience.
Additional evidence that online testing is a means of stimulating students' academic motivation was provided by the findings that most answers (49.9%) were provided a long time (more than one month) before the exam date, and that the students who completed the tests more than one month before the final exam had higher academic motivation, expended more effort (made more attempts at responding to the tests), and received higher test scores. A further 46% of the answers were provided within one month of the deadline, and the students who completed the tests less than one month before the final exam had moderate/medium academic motivation significantly more often than expected. One of the scientific contributions of this study is that it revealed the potential of online testing to stimulate long-term learning motivation.
Although this study examines, for the first time, the abovementioned parameters in a sample of Bulgarian university students, some limitations of the study are related to the properties of the tests applied online: the tests were non-standardized, and two of them contained fewer than ten items. There is a trend for longer tests to be more reliable and to have better content validity [6].
Another possible limitation is related to the finding that learners from countries with high socioeconomic status more frequently enrolled in massive open online courses, more frequently completed them, and more frequently paid for a certificate from such a course [48]. In accordance with this finding, it is possible that the socioeconomic status of the students interacts in some way with their learning motivation and performance; however, in the present study, all the students had access to a computer or a mobile device (smartphone, tablet, etc.), and all of them were able to respond to the online tests multiple times during the semester, at a time and place convenient to them. Because the students could both study and work in this manner, the influence of their socioeconomic status on their learning motivation and academic performance was minimized. At the same time, the opportunity to respond to each online test multiple times, when and where the student decided, during the semester, minimized the significance of any difficulty related to an unavailable Internet connection.
Another limitation of this study may be related to the sudden shift towards e-learning and, accordingly, online testing. However, students enjoy research activities, including tests [49], and the participating students were able to work with computers and mobile devices, which should have facilitated their ability to cope with online testing situations. Students' test-taking skills (i.e., their cognitive skills and knowledge of how to behave before, during, and after testing) correlate positively with their attitudes towards tests, their motivation to learn, and their attitudes towards the academic subject, and correlate negatively with their test anxiety; all these variables play a significant role in students' level of achievement [50].
Providing the possibility of multiple responses to a test, with feedback about the result achieved after each attempt, and grading only the best result should suppress the impulse to cheat in order to achieve higher test scores. In addition, this approach should diminish test anxiety. This seems to be one of the advantages of online testing in the present study. Furthermore, frequent testing provides an updated measure of students' progress [51], and formative assessment gathers evidence of students' understanding [52], which permitted teaching to be adjusted to the students' needs.

5. Conclusions

This study provides empirical evidence regarding the possibilities of using testing, and especially online testing, to improve learning motivation under conditions that diminish anxiety, such as unlimited time for answering, an unrestricted number of attempts at responding to a test, and task-focused feedback after each attempt, i.e., with the opportunity to learn from one's own previous mistakes and to correct one's own answers. This study also revealed the potential of online testing to stimulate long-term learning motivation and to maintain academic motivation during the whole academic year. In the present study, most of the participating Bulgarian university students were motivated to learn to a medium or moderate degree, measured both with a personality questionnaire and behaviorally, through success or failure on their first final exam date. This finding corresponds to the essence of norm-oriented testing, with the largest group of participants receiving medium results, which may be interpreted as evidence of the objectivity of the research. Highly motivated students achieved the highest test scores in online testing, made the most attempts to respond to each test, and preferred to finish their interim control online testing assignments long before the final exam date, so the highest test scores were achieved more than one month before the final exam. Highly motivated students expended more effort on their successful performance, demonstrated by more online testing assessment attempts, and they used online testing, with the opportunity for an unrestricted number of answers, as an additional mode of learning, from which they benefitted more than the students with lower motivational levels.
Future research may be directed toward studying the combined impact of online testing and gamification, or another educational approach, on academic motivation and academic performance. Further research may also focus on comparing the possibilities of online testing to stimulate learning motivation and academic performance in different educational and scientific fields.

Author Contributions

Conceptualization, S.S. and V.G.; methodology, S.S. and V.G.; software, S.S. and V.G.; validation, S.S. and V.G.; formal analysis, S.S. and V.G.; investigation, S.S. and V.G.; resources, S.S. and V.G.; data curation, S.S. and V.G.; writing—original draft preparation, S.S. and V.G.; writing—review and editing, S.S. and V.G.; supervision, S.S. and V.G.; project administration, S.S. and V.G.; funding acquisition, S.S. and V.G. All authors have read and agreed to the published version of the manuscript.

Funding

The APC was funded by European University Cyprus.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki. It was a part of university e-classes, and it was recorded as a part of these e-classes; therefore, its approval by the Institutional Review Board-Ethics Committee of the university was not mandatory.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available on request from the first author. The data are not publicly available due to privacy restrictions.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Daud Mahande, R.; Tekpen, A. Motivational factors underlying the use of online learning system in higher education: An analysis of measurement model. Turk. Online J. Distance Educ.-TOJDE 2021, 22, 89–105. [Google Scholar] [CrossRef]
  2. Evers, A.; McCormick, C.M.; Hawley, L.R.; Muñiz, J.; Balboni, G.; Bartram, D.; Boben, D.; Egeland, J.; El-Hassan, K.; Fernández-Hermida, J.R.; et al. Testing practices and attitudes toward tests and testing: An international survey. Int. J. Test. 2017, 17, 158–190. [Google Scholar] [CrossRef] [Green Version]
  3. Efremova, N.; Shvedova, S.; Huseynova, A. The influence of assessment on learning motivation. In SHS Web of Conferences: Trends in the Development of Psycho-Pedagogical Education in the Conditions of Transitional Society (ICTDPP-2019), Rostov-on-Don, Russia, 22–23 November, 2019; Abakumova, I.V., Vorobyova, E.V., Eds.; EDP Sciences: Les Ulis, France, 2019; Volume 70. [Google Scholar] [CrossRef] [Green Version]
  4. Lathrop, A. Impact of Student Motivation in Online Learning Activities. Master’s Dissertation, University of Nebraska-Lincoln, Lincoln, NE, USA, 2011. Theses, Dissertations, and Student Research in Agronomy and Horticulture, 24. Available online: https://digitalcommons.unl.edu/agronhortdiss/24 (accessed on 20 November 2022).
  5. Knekta, E. Motivational Aspects of Test-Taking: Measuring Test-Taking Motivation in Swedish National Test Contexts; Umeå University: Umeå, Sweden, 2017; Available online: https://umu.diva-portal.org/smash/get/diva2:1071134/FULLTEXT01.pdf (accessed on 20 November 2022).
  6. Van Blerkom, M.L. Measurement and Statistics for Teachers; Routledge: New York, NY, USA, 2009. [Google Scholar]
  7. Baumert, J.; Demmrich, A. Test motivation in the assessment of student skills: The effects of incentives on motivation and performance. Eur. J. Psychol. Educ. 2001, 16, 441–462. [Google Scholar] [CrossRef]
  8. Kingston, N.M.; Scheuring, S.T.; Kramer, L.B. Test development strategies. In APA Handbook of Testing and Assessment in Psychology; Geisinger, K.F., Bracken, B.A., Carlson, J.F., Hansen, J.-I.C., Kuncel, N.R., Reise, S.P., Rodriguez, M.C., Eds.; American Psychological Association: Washington, DC, USA, 2013; Volume 1, pp. 165–184. [Google Scholar]
  9. Harlen, W.; Crick, R.D. Testing and motivation for learning. Assess. Educ. Princ. Policy Pract. 2003, 10, 169–207. [Google Scholar] [CrossRef]
  10. Zakarneh, B.; Al-Ramahi, N.; Mahmoud, M. Challenges of teaching English language classes of slow and fast learners in the United Arab Emirates universities. Int. J. High. Educ. 2020, 9, 256–269. [Google Scholar] [CrossRef]
  11. Varghese, S.S.; Aneesa, N. Teaching slow learners and fast learners separately in small group teaching in dental school: Students’ perception, concern and impact. Int. J. Dent. Oral Sci. (IJDOS) 2021, 8, 2025–2030. [Google Scholar] [CrossRef]
  12. Raj, A.D.; Agarwal, K.; Jain, S.C.; Sharma, M.C.; Shukla, S.; Kumar, S. Unit 9 Commonly Used Tests in Schools; IGNOU Publisher: New Delhi, India, 2018; Available online: http://egyankosh.ac.in//handle/123456789/46950 (accessed on 20 November 2022).
  13. Bohndick, C.; Menne, C.M.; Kohlmeyer, S.; Buhl, H.M. Feedback in internet-based self-assessments and its effects on acceptance and motivation. J. Furth. High. Educ. 2019, 44, 717–728. [Google Scholar] [CrossRef]
  14. Attali, Y.; Powers, D. Immediate feedback and opportunity to revise answers to open-ended questions. Educ. Psychol. Meas. 2010, 70, 22–35. [Google Scholar] [CrossRef]
  15. Wise, S.L.; Plake, B.S.; Pozehl, B.J.; Barnes, L.B.; Lukin, L.E. Providing item feedback in computer-based tests: Effects of initial success and failure. Educ. Psychol. Meas. 1989, 49, 479–486. [Google Scholar] [CrossRef]
  16. Cole, M.T.; Shelley, D.J.; Swartz, L.B. Online instruction, e-learning, and student satisfaction: A three-year study. Int. Rev. Res. Open Distance Learn. 2014, 15, 111–131. [Google Scholar] [CrossRef]
  17. Yang, C.; Luo, L.; Vadillo, M.A.; Yu, R.; Shanks, D.R. Testing (quizzing) boosts classroom learning: A systematic and meta-analytic review. Psychol. Bull. 2021, 147, 399–435. [Google Scholar] [CrossRef] [PubMed]
  18. Alonzo, A.C. Defining trustworthiness for teachers’ multiple uses of classroom assessment results. In Classroom Assessment and Educational Measurement; Brookhart, S.M., McMillan, J.H., Eds.; Routledge: New York, NY, USA, 2020; pp. 120–145. [Google Scholar]
  19. UNESCO International Bureau of Education; Muskin, J.A. Continuous Assessment for Improved Teaching and Learning: A Critical Review to Inform Policy and Practice. 2017. Available online: https://unesdoc.unesco.org/ark:/48223/pf0000255511 (accessed on 20 November 2022).
  20. Holmes, N. Engaging with assessment: Increasing student engagement through continuous assessment. Act. Learn. High. Educ. 2017, 19, 23–34. [Google Scholar] [CrossRef] [Green Version]
  21. Yovkov, L. Students’ preferences for delivery modes of content in e-learning. In Scientific Works of the Union of Scientists in Bulgaria—Plovdiv. VIIIth International Conference of Young Scientists, 23–26 July 2020, Plovdiv. Series B. Natural Sciences and the Humanities; Andonov, V., Panchovska-Mocheva, M., Hadziev, B., Vasilev, V., Dimitrakov, D., Todorov, Y., Kostadinova-Georgieva, L., Panayotov, N., Andreeva, T., Vasilev, S., et al., Eds.; Union of Scientists Plovdiv: Plovdiv, Bulgaria, 2020; Volume XX, pp. 49–52. [Google Scholar]
  22. Hariri-Akbari, M.; Shokrvash, B.; Mahmoodi, F.; Jahanjoo-Aminabad, F.; Yousefi, B.; Azabdaftari, F. Conversion of extrinsic into intrinsic motivation and computer based testing (CBT). BMC Med. Educ. 2018, 18, 143. [Google Scholar] [CrossRef] [PubMed]
  23. Silm, G.; Must, O.; Täht, K.; Pedaste, M. Does test-taking motivation predict test results in a high-stakes testing context? Educ. Res. Eval. 2021, 26, 387–413. [Google Scholar] [CrossRef]
  24. Totkov, G.; Doneva, R.; Gaftandzhieva, S.; Somova, E.; Hadzhikoleva, S.; Kasakliev, N.; Kiryakova, G.; Angelova, N.; Raykova, M.; Kostadinova, H.; et al. Uvod v E-Obuchenieto [Introduction to E-Learning]; Rakursi LTD: Plovdiv, Bulgaria, 2014; Volume 1. [Google Scholar]
  25. Fung, C.Y.; Su, S.I.; Perry, E.J.; Garcia, M.B. Development of a socioeconomic inclusive assessment framework for online learning in higher education. In Socioeconomic Inclusion during an Era of Online Education; Garcia, M.B., Ed.; IGI Global: Hershey, PA, USA, 2022; pp. 23–46. [Google Scholar]
  26. Stoyanova, S.; Yovkov, L. Educational objectives in e-learning. Int. J. Humanit. Soc. Sci. Educ. 2016, 3, 8–11. [Google Scholar] [CrossRef]
  27. Brame, C.J.; Biel, R. Test-enhanced learning: The potential for testing to promote greater learning in undergraduate science courses. CBE—Life Sci. Educ. 2015, 14, es4. [Google Scholar] [CrossRef] [Green Version]
  28. Wiliam, D. Feedback and instructional correctives. In SAGE Handbook of Research on Classroom Assessment; McMillan, J.H., Ed.; SAGE Publications: Thousand Oaks, CA, USA, 2013; pp. 197–214. [Google Scholar]
  29. Elshareif, E.; Mohamed, E. The effects of e-learning on students’ motivation to learn in higher education. Online Learn. 2021, 25, 128–143. [Google Scholar] [CrossRef]
  30. Vaessen, B.E.; van den Beemt, A.; van de Watering, G.; van Meeuwen, L.W.; Lemmens, L.; den Brok, P. Students’ perception of frequent assessments and its relation to motivation and grades in a statistics course: A pilot study. Assess. Eval. High. Educ. 2017, 42, 872–886. [Google Scholar] [CrossRef] [Green Version]
  31. Penk, C.; Pöhlmann, C.; Roppelt, A. The role of test-taking motivation for students’ performance in low-stakes assessments: An investigation of school-track-specific differences. Large-Scale Assess. Educ. 2014, 2, 5. [Google Scholar] [CrossRef]
  32. Thomsen, M.K.; Seerup, J.K.; Dietrichson, J.; Bondebjerg, A.; Viinholt, B.C.A. Protocol: Testing frequency and student achievement: A systematic review. Campbell Syst. Rev. 2022, 18, e1212. [Google Scholar] [CrossRef]
  33. Bangert-Drowns, R.L.; Kulik, J.A.; Kulik, C.-L.C. Effects of frequent classroom testing. J. Educ. Res. 1991, 85, 89–99. [Google Scholar] [CrossRef]
  34. McGatha, M.B.; Bush, W.S. Classroom assessment in mathematics. In SAGE Handbook of Research on Classroom Assessment; McMillan, J.H., Ed.; SAGE Publications: Thousand Oaks, CA, USA, 2013; pp. 449–460. [Google Scholar]
  35. Al Salmi, S.; AlMajeed, S.S.; Karam, J. Online Exams for Better Students’ Performance. In 9th International Conference on Education, Teaching & Learning (ICE 19), 26–28 April; Wager College: New York, NY, USA, 2019; Available online: http://eprints.glos.ac.uk/id/eprint/6803 (accessed on 20 November 2022).
  36. McMillan, J.H. Classroom Assessment: Principles and Practice That Enhance Student Learning and Motivation, 7th ed.; Pearson: London, UK, 2018. [Google Scholar]
  37. Hamdan, K.; Amorri, A. The impact of online learning strategies on students’ academic performance. In E-Learning and Digital Education in the Twenty-First Century; Shohel, M.M.C., Ed.; IntechOpen: London, UK, 2022; Chapter 3; pp. 1–19. [Google Scholar] [CrossRef]
  38. Ross, B.; Chase, A.-M.; Robbie, D.; Oates, G.; Absalom, Y. Adaptive quizzes to increase motivation, engagement and learning outcomes in a first year accounting unit. Int. J. Educ. Technol. High. Educ. 2018, 15, 30. [Google Scholar] [CrossRef] [Green Version]
  39. Giannouli, V. What is the next small big thing in psychology? Psychol. Thought 2017, 10, 1–6. [Google Scholar] [CrossRef] [Green Version]
  40. Bloom, B.S.; Engelhart, M.D.; Furst, E.D.; Hill, W.H.; Krathwogl, D.R. (Eds.) Taxonomy of Educational Objectives: The Classification of Educational Goals. Handbook 1: Cognitive Domain; Longmans: Ann Arbor, MI, USA, 1956. [Google Scholar]
  41. Stoimenova, E. Measurement Properties of Tests; Institute of Mathematics and Informatics: Sofia, Bulgaria, 2000. [Google Scholar]
  42. Radoslavova, M.; Velichkov, A. Methods for Psychodiagnostics; Pandora Prim: Sofia, Bulgaria, 2005. [Google Scholar]
  43. Balart, P.; Oosterveen, M. Females show more sustained performance during test-taking than males. Nat. Commun. 2019, 10, 3798. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  44. Stolk, J.D.; Gross, M.D.; Zastavker, Y.V. Motivation, pedagogy, and gender: Examining the multifaceted and dynamic situational responses of women and men in college STEM courses. Int. J. STEM Educ. 2021, 8, 35. [Google Scholar] [CrossRef]
  45. Hakan, K.; Münire, E. Academic motivation: Gender, domain and grade differences. Procedia-Soc. Behav. Sci. 2014, 143, 708–715. [Google Scholar] [CrossRef]
  46. New Zealand Government. Education Counts. 2022. Available online: https://www.educationcounts.govt.nz/statistics/achievement-and-attainment (accessed on 20 November 2022).
  47. Cocks, T. Part Time V Fulltime Students, What Are the Differences in Academic Results and Engagement? 2019. Available online: https://rstudio-pubs-static.s3.amazonaws.com/545577_92d8e15dac8a439fb2bba4963041c6eb.html (accessed on 20 November 2022).
  48. Ruipérez-Valiente, J.A. A macro-scale MOOC analysis of the socioeconomic status of learners and their learning outcomes. In Socioeconomic Inclusion during an Era of Online Education; Garcia, M.B., Ed.; IGI Global: Hershey, PA, USA, 2022; pp. 1–22. [Google Scholar]
  49. Lamanauskas, V. Reflections on Education; Scientia Socialis Press: Šiauliai, Lithuania, 2017. [Google Scholar]
  50. Dodeen, H.M.; Abdelfattah, F.; Alshumrani, S. Test-taking skills of secondary students: The relationship with motivation, attitudes, anxiety and attitudes towards tests. South Afr. J. Educ. 2014, 34, 866. [Google Scholar]
  51. Black, P. Formative and summative aspects of assessment: Theoretical and research foundations in the context of Pedagogy. In SAGE Handbook of Research on Classroom Assessment; McMillan, J.H., Ed.; SAGE Publications: Thousand Oaks, CA, USA, 2013; pp. 167–178. [Google Scholar]
  52. McMillan, J.H. Why we need research on classroom assessment. In SAGE Handbook of Research on Classroom Assessment; McMillan, J.H., Ed.; SAGE Publications: Thousand Oaks, CA, USA, 2013; pp. 3–16. [Google Scholar]
Table 1. Psychometric properties of the tests measuring knowledge and skills related to psychological measurements.

Knowledge and Skills Measured by the Test | Number of Items | Maximum Possible Score | Average Number of Attempts | Mean Score, All Attempts (SD) | Mean Score, First Attempt (SD) | Difficulty, All Attempts (% of Maximum) | Difficulty, First Attempt (% of Maximum) | Test–Retest Reliability (One-Week Interval)
Main concepts | 19 | 28 | 1.76 | 22.37 (5.90) | 21.19 (6.39) | 79.9 | 75.7 | 0.612
Levels of measurement | 34 | 38 | 2.42 | 28.23 (9.00) | 26.39 (9.16) | 74.3 | 69.4 | 0.590
Test construction | 9 | 25 | 1.55 | 21.52 (4.78) | 21.25 (4.88) | 86.1 | 85.0 | 0.577
Item formats | 14 | 21 | 3.07 | 14.49 (5.13) | 13.48 (5.35) | 69.0 | 64.2 | 0.821
Item analysis | 19 | 24 | 3.37 | 17.14 (7.15) | 18.12 (6.07) | 71.4 | 75.5 | 0.632
Reliability and validity | 18 | 23 | 2.17 | 16.99 (5.50) | 15.39 (6.29) | 73.9 | 66.9 | 0.824
Norms | 21 | 23 | 2.05 | 18.81 (4.15) | 17.41 (4.34) | 81.8 | 75.7 | 0.798
Interpretation of results from testing | 7 | 10 | 2.33 | 7.07 (2.41) | 6.51 (2.27) | 70.7 | 65.1 | 0.756
Test situation, translation of questionnaires, measurement of attitudes, multidimensional scaling | 14 | 19 | 1.42 | 15.97 (3.89) | 15.04 (4.24) | 84.1 | 79.2 | 0.757
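As a reading aid for Table 1: the two difficulty columns are simply the mean test score expressed as a percentage of the maximum possible score, so a higher percentage indicates an easier test. A worked check against the "Main concepts" row, using only values from the table:

\[
\text{Difficulty} = \frac{\bar{X}}{X_{\max}} \times 100\% = \frac{22.37}{28} \times 100\% \approx 79.9\%.
\]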
Table 2. Means and standard deviations of the test scores and number of attempts responding to a test made by the students with different learning motivation.

Measure | Motivation Level | N | M | SD | Standard Error | Minimum | Maximum
Test score achieved | low motivation | 46 | 16.78 | 9.677 | 1.427 | 1 | 37
Test score achieved | medium motivation | 991 | 18.73 | 7.834 | 0.249 | 0 | 38
Test score achieved | high motivation | 189 | 20.10 | 7.944 | 0.578 | 4 | 38
Number of attempts responding to the test | low motivation | 46 | 1.15 | 0.363 | 0.054 | 1 | 2
Number of attempts responding to the test | medium motivation | 991 | 2.40 | 2.456 | 0.078 | 1 | 23
Number of attempts responding to the test | high motivation | 189 | 2.53 | 2.561 | 0.186 | 1 | 16
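The standard errors reported in Tables 2–7 agree with the usual formula for the standard error of the mean, \( SE = SD/\sqrt{N} \). A worked check against the low-motivation test-score row of Table 2:

\[
SE_{\bar{X}} = \frac{SD}{\sqrt{N}} = \frac{9.677}{\sqrt{46}} \approx 1.427.
\]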
Table 3. Average test scores and number of attempts responding to a test differentiated by gender.

Measure | Gender | N | M | SD | Std. Error Mean
Test score achieved | male | 69 | 22.32 | 7.068 | 0.851
Test score achieved | female | 1157 | 18.66 | 7.951 | 0.234
Number of attempts responding to the test | male | 69 | 2.10 | 1.467 | 0.177
Number of attempts responding to the test | female | 1157 | 2.39 | 2.484 | 0.073
Table 4. Frequency distribution of levels of academic motivation, according to gender.

Gender | Statistic | Low Motivation | Medium Motivation | High Motivation
Male | observed count | 2 | 38 | 29
Male | expected count | 2.6 | 55.8 | 10.6
Male | % within gender | 2.9% | 55.1% | 42.0%
Female | observed count | 44 | 953 | 160
Female | expected count | 43.4 | 935.2 | 178.4
Female | % within gender | 3.8% | 82.4% | 13.8%
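The expected counts in Tables 4 and 8 follow the standard contingency-table formula used in a chi-square test of independence: each cell's expectation is its row total multiplied by its column total, divided by the grand total. A worked check for men with medium motivation, using the marginal totals implied by Table 4 (69 male responses, 991 medium-motivation responses, 1226 responses overall):

\[
E_{ij} = \frac{R_i \, C_j}{N} = \frac{69 \times 991}{1226} \approx 55.8.
\]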
Table 5. Average test scores and number of attempts responding to a test differentiated by form of study.

Measure | Form of Study | N | M | SD | Std. Error Mean
Test score achieved | full-time | 911 | 19.22 | 7.914 | 0.262
Test score achieved | part-time | 315 | 17.85 | 7.962 | 0.449
Number of attempts responding to the test | full-time | 911 | 2.47 | 2.684 | 0.089
Number of attempts responding to the test | part-time | 315 | 2.10 | 1.488 | 0.084
Table 6. Average test scores and number of attempts responding to a test differentiated by educational degree.

Measure | Educational Degree | N | M | SD | Std. Error Mean
Test score achieved | bachelor's | 662 | 18.15 | 8.445 | 0.328
Test score achieved | master's | 564 | 19.72 | 7.232 | 0.305
Number of attempts responding to the test | bachelor's | 662 | 2.58 | 2.920 | 0.113
Number of attempts responding to the test | master's | 564 | 2.13 | 1.679 | 0.071
Table 7. Average test scores and number of attempts responding to a test differentiated by time of responding.

Measure | Time of Responding | N | M | SD | Std. Error Mean
Test score achieved | more than one month before the final exam | 612 | 19.58 | 8.18 | 0.331
Test score achieved | less than one month before the final exam | 564 | 18.52 | 7.67 | 0.323
Test score achieved | during the final exam | 50 | 14.18 | 6.09 | 0.861
Number of attempts responding to the test | more than one month before the final exam | 612 | 2.74 | 3.05 | 0.123
Number of attempts responding to the test | less than one month before the final exam | 564 | 2.07 | 1.58 | 0.067
Number of attempts responding to the test | during the final exam | 50 | 1.38 | 0.67 | 0.094
Table 8. Frequency distribution of levels of academic motivation, according to the time of test completion.

Time of Completion of the Test | Statistic | Low Motivation | Medium Motivation | High Motivation
More than one month before the exam | count | 23 | 467 | 122
More than one month before the exam | expected count | 23.0 | 494.7 | 94.3
More than one month before the exam | % within time of completion | 3.8% | 76.3% | 19.9%
Less than one month before the exam | count | 17 | 480 | 67
Less than one month before the exam | expected count | 21.2 | 455.9 | 86.9
Less than one month before the exam | % within time of completion | 3.0% | 85.1% | 11.9%
During the exam | count | 6 | 44 | 0
During the exam | expected count | 1.9 | 40.4 | 7.7
During the exam | % within time of completion | 12.0% | 88.0% | 0.0%