Article

IEyeGASE: An Intelligent Eye Gaze-Based Assessment System for Deeper Insights into Learner Performance

by Chandrika Kamath Ramachandra * and Amudha Joseph
Department of Computer Science and Engineering, Amrita School of Engineering, Amrita Vishwa Vidyapeetham, Bengaluru 560047, India
* Author to whom correspondence should be addressed.
Sensors 2021, 21(20), 6783; https://doi.org/10.3390/s21206783
Submission received: 11 September 2021 / Revised: 1 October 2021 / Accepted: 6 October 2021 / Published: 13 October 2021
(This article belongs to the Special Issue Eye Tracking Techniques, Applications, and Challenges)

Abstract: In the current education environment, learning takes place outside the physical classroom, and tutors need to determine whether learners are absorbing the content delivered to them. Online assessment has become a viable option for tutors to establish learners' achievement of course learning outcomes. It provides real-time progress and immediate results; however, it struggles to quantify learner aspects such as wavering behavior, confidence level, knowledge acquired, quickness in completing the task, task engagement, and inattentional blindness to critical information. An intelligent eye gaze-based assessment system called IEyeGASE was developed to measure insights into these behavioral aspects of learners. The system can be integrated into existing online assessment systems and helps tutors recalibrate learning goals and provide necessary corrective actions.

1. Introduction

Learning assessment is a fundamental feedback mechanism that allows stakeholders to understand what is being learned and where learning resources need to be focused. Assessment can take different modalities based on its purpose: it can be either summative or formative [1,2]. When an assessment is part of the teaching process, is carried out in a classroom, and involves evaluator observation, feedback, homework, etc., it is called a formative assessment. It is relevant for understanding the learning needs of learners and adjusting instruction accordingly. Summative assessments determine to what extent learners have achieved the learning goals and acquired the critical knowledge and skills related to the educational content. They are usually conducted at the end of an instructional unit. Summative and formative assessments can be either objective or subjective. Subjective assessment is a form of questioning where there is more than one way of expressing the answer. Objective assessment includes true/false, multiple-choice, and matching questions.
Multiple-choice questions have a strong association with assessing lower-order cognition, such as recall of concepts, and hence they are popularly used in online assessment in higher education. The education sector is slowly replacing traditional pen-and-paper assessment with online assessment. Online evaluation produces quick results on learners' progress. It provides insight into how they are doing and which areas of learning require attention [3,4]. Educators use various tools such as Google Forms, Socrative [5], and Mentimeter [6] for online assessment. Most of these tools help create multiple choice-based assessments and provide quick feedback on the learner's performance. However, the feedback is based on the score, the option chosen, the time spent on each question, the response time, etc. It indicates only whether the learner has memorized the concepts. Well-written multiple choice-based assessments can also evaluate higher-order thinking such as application, creativity, and analytical skills. The intelligent tutoring system in [7] predicts learner performance via summative and formative assessments and also identifies learners at risk of failure in the final evaluation. However, these systems do not provide deeper insights into how learners answered the questions: what was perceived, whether the answer was a mere guess, whether there was a state of confusion, and what factors led to choosing the correct or incorrect option. An objective assessment that can quantify the cognitive parameters of learners can explain the factors that lead to correct or incorrect answers. Cognitive parameters provide insights into a learner's mental process of gaining knowledge, such as engagement in the task, the order and time of processing information, and the time spent on crucial details in the task. Depending on the skill set to be measured, researchers use various tests such as the Predictive Index Learning Indicator test [8], the McQuaig Mental Agility Test [9], and the Progress test [10], as well as technologies like EEG and eye tracking [11,12,13,14], to measure cognitive parameters. The Progress test is an assessment method in the cognitive domain used to assess students on topics from medical disciplines. In this research work, we use eye tracking technology to develop an intelligent system that can quantify cognitive parameters and provide feedback on learner performance in an objective-based online assessment for programming-related courses.
Eye tracking is a sensor technology that lets a computer detect the attention and focus of the learner. The technology investigates visual attention based on the eye–mind hypothesis [15]. According to the hypothesis, there is a close relationship between eye gaze and attention while performing any task involving information processing. It has been applied in various research areas such as reading [16,17,18,19,20], comprehension [21,22], information processing [18,23], human–computer interaction [24,25], skill identification [26,27], problem solving [28,29], and e-learning and learner evaluation [3,30,31,32,33,34,35]. These studies provide insight into the cognitive strategies of individuals, the amount of attention, difficulties in reading, and the learning process, and they establish a link between eye-gaze patterns and the cognitive process. Eye tracking technology is well suited to multiple-choice (MC) assessment. It allows recording the learner's attention distribution while solving a task without placing any extra load on the learner's working memory. Moreover, the recordings are objective and provide both temporal and spatial data that can reveal unconscious cognitive events, such as decision-making behaviour, that are not accessible from external observation. To investigate the decision-making behavior of learners in an MC assessment, we applied Bloom's Taxonomy [36]. Bloom's Taxonomy is a hierarchical way of classifying levels of thinking and is applied by universities to design course objectives. It has six levels of thinking: Remember, Understand, Apply, Analyze, Evaluate, and Create. Bloom's Taxonomy is explained in detail in Section 4.1. In this research work, we focus on the basic level of thinking, Remember, where the learner must recall information from long-term memory. It helps in probing questions such as: why can the learner not perform, has the learner not understood or learned the concept, did the learner overlook the question, or is the multiple-choice question designed with confusing options that make the learner struggle to perform? Eye tracking data provide insight into these aspects and help redefine specific goals and objectives based on the feedback and recommendations.
In this research work, we developed an intelligent system called the "Intelligent Eye Gaze learner AsSEssment System" (IEyeGASE) to analyze the eye gaze of learners taking an online assessment and map it to cognitive parameters that provide deeper insights into their performance. The IEyeGASE system can be integrated with an online assessment system to quantify learners' performance based on engagement in the task, attention to critical information in the task, wavering behaviour, knowledge acquisition, and confusion between various options.
The following section discusses relevant works in educational science and eye tracking research for computer-based testing. Section 3 presents the proposed methodology for developing the intelligent eye gaze-based learner assessment system, including the various hypotheses and cognitive parameters, such as wavering behaviour, engagement in the task, and inattentional blindness, that provide insights into a learner's performance. The subsequent sections describe the materials used for displaying MCQs, the eye tracking study apparatus, and participant details, followed by the results and a comparison of the intelligent system with a traditional pen-and-paper approach and an online assessment platform.

2. Related Works

Despite the best efforts of academicians, learners struggle with programming activities, particularly in freshman courses [37]. Various analytical methods [38,39] and e-learning platforms [30,32,40] are used to improve the learning experience. All these research works focused on learning design and its evaluation, but very few [3,4,28,29,31,34,35,41,42] focused on learners' decision-making, lesson-specific skills, and problem-solving skills. The study [2] discussed the use of computer-based tests to support and assess student learning. It investigated the effects of adapting computer-based test items according to multimedia learning guidelines. The results indicate that following multimedia guidelines can lower students' difficulty, lead to more attention to questions and answers, and reduce cognitive load.
Speech analytics was used in traditional assessment to assess the reading proficiency of learners [43]. Machine learning algorithms were further used to evaluate the knowledge level of learners based on expert-formulated responses [44]. Non-intrusive methods were then used to assess learner behavior during assessment [3,4,31,34,35]. These research works used Multiple-Choice Questions (MCQs) to test the learner's knowledge.
Eye tracking research is increasingly used in educational science in three major areas: improving the instructional design of computer-based learning and testing environments, developing expertise in visual domains, and promoting visual expertise using eye tracking modelling examples [45]. Instructional design research investigates how a new skill or knowledge can be learned by optimally designing the learning material. Eye tracking can help understand how learners process instructional material and which processes learners should devote effort to in order to achieve learning gains efficiently [46]. Eye tracking research is also extensively used to develop expertise and understand how long-term memory structures influence how we see and interpret our environment [21,47,48]. Lastly, eye tracking is applied in educational science to investigate how visual expertise can be trained with the help of instructional videos of real-world tasks explained by experts in the field; this is mostly used for guiding novice learners [49,50,51]. Our research work focuses on applying eye tracking to understand learner behaviour in a computer-based testing environment. In the following paragraphs, we discuss a few relevant works in this area.
Eye tracking was used to inspect learners' visual attention while solving multiple-choice questions on mathematical problems related to the Pythagorean theorem and related factors [34]. Percentage gaze time on the AOIs was investigated to find the most probable choices for questions. The probable choices had a higher percentage gaze time than the less probable ones. As soon as the most probable choices were recognized, a wavering behavior between the two alternatives was observed in testers. Wavering behavior was found to decrease once the tester gave the first answer. Gaze patterns were then compared with previously stored patterns to predict the success of the solution. The research work inspected the way testers dealt with MCQs and used these inputs to develop an intelligent tool to help learners learn better. In another work [52], AOI coverage was used as a metric to compare the natural language text and source code reading behaviour of experts and novices. A two-level abstraction was considered: element/word level and line level. Segmenting the stimuli into multiple levels of abstraction revealed detailed insight into the reading behaviour of novices and experts. A similar study was conducted on solving MCQs in science problems [3]. The study shows that successful problem solvers gazed at relevant options, while unsuccessful problem solvers struggled to understand the problem and recognize the most relevant factors. Percentage fixation duration was used to evaluate the attention on appropriate choices.
An eye tracking system was developed to predict performance in a computer-based assessment of physics concept questions [31]. The study investigated eye tracking features for predicting learner performance. The results indicate that mean fixation duration and re-reading time were more predictive of choosing the correct option, whereas mean saccade time was less predictive. Pictorial and text representations were presented to learners, and the pictorial representation was found to be preferred. The importance of the gaze bias effect in the decision-making process of learners in knowledge assessment was investigated in [4]. Eye gaze patterns of High Prior Knowledge (HPK) and Low Prior Knowledge (LPK) learners were recorded. Both HPK and LPK learners were found to have a gaze bias toward subjectively preferred options, whereas HPK learners fixated more on the objectively correct option. Twenty-one MCQs were presented to the learners, each with correct, attractive, and non-attractive options. Percentage gaze duration was used for the analysis of their performance. The study provided evidence of the generalizability of the gaze bias effect in the decision-making process.
In [53], the authors developed an online hypothesis testing platform for experimental research in cartography and psychological diagnostics. The platform consists of various modules for the creation of tasks and the management of tests and users, and it allows the export of raw data for analysis. The tool can be used for collaborative tasks and can be integrated with eye tracking systems; however, it does not measure the cognitive parameters of users. In another research work [29], a pilot study was conducted to investigate the problem-solving behaviors of learners with different levels of expertise in three disciplines of science (biology, chemistry, and physics). The study provides evidence that eye tracking can distinguish different levels of expertise across disciplines. In [54], eye movements were used to investigate a bug-fixing task. The participants were grouped as experts and novices. The results indicate that programmer code reading behaviour, measured using eye tracking at the line level and element level, can be used to differentiate participant expertise.
The related works in eye tracking indicate that eye gaze can reveal learner behaviour during assessment and give insight into the learner's domain knowledge and decision-making process. Most studies use eye tracking to differentiate learner expertise and understand reading behaviour and problem-solving skills. However, few studies use eye tracking technology for online evaluation and provide feedback on the factors that affected learner performance.

3. Intelligent Eye Gaze Learner AsSEssment System-IEyeGASE

The related works indicate that eye tracking technology can provide insights into learners' learning behavior and decision-making process; however, few studies have used it to build a system that assesses the learner and provides personalized feedback. To understand the learner's cognitive and visual behavior during assessment and provide deeper insights into performance, we developed an intelligent learner assessment system called the Intelligent Eye Gaze-based learner AsSEssment (IEyeGASE) system. The IEyeGASE system tracks the eye movements of learners while they take an online objective assessment and provides personalized feedback on their performance based on various hypotheses and cognitive parameters. An online assessment system usually provides only a score to quantify the performance of the learner. The proposed learner assessment system provides a deeper understanding of the cognitive factors that influenced the learner's performance and delivers personalized feedback. Cognitive parameters like wavering behaviour, confusion, engagement in the task, findability, and knowledge acquisition provide detailed insights into learners' performance that can be used to recalibrate learning goals and help learners take corrective actions. Personalized feedback is provided for every objective question attempted by the learner. Figure 1 represents the various modules of the proposed learner assessment system. It has the following modules:
  • Objective Assessment
  • Data Collection
  • Low Level Feature Extraction
  • High Level Feature Extraction
  • Intelligent Eye Gaze AsSEssment System (IEyeGASE)
  • Data Visualization
The objective assessment system provides an interface for the learner to interact with the learner assessment system. The data collection module collects the learner's raw eye movements while the learner interacts with the system. The low level and high level feature extraction modules generate features from the raw eye movements. These features are mapped to the various cognitive parameters and hypotheses discussed in Section 3.6. Finally, the IEyeGASE system uses these high level features to analyze learner performance and provide personalized feedback. The different modules of the learner assessment system are discussed in detail in the following subsections.

3.1. Objective Assessment

The objective-based assessment module is the interface that displays the objective-based assessment questions to the learner. In this research work, multiple-choice questions (MCQs) are used for assessing the performance of learners. It is designed using the SMI Experiment Suite 360° [55]. The interface displays the MCQs discussed in Section 4.2 one after the other. The learner response is manually recorded using the Think Aloud method [56], where the learner speaks the answer loud enough for the observer to record it. A binary score of 1 or 0 was given to right and wrong answers, respectively.

3.2. Data Collection

The data collection module records raw eye gaze data using the SMI REDn Professional [57] eye tracker while the learner is being assessed. The raw sensory data consist of various fields such as participant details, calibration details, calibration area, system details, timestamp, trial number, gaze position, pupil position, pupil diameter, and quality values. Gaze position is the coordinate point inside the calibration area at which the user gazed; in this study, the calibration area is the screen area. For eye gaze data analysis, the data points of interest are the timestamp and the gaze position, represented by the Raw X and Raw Y coordinates. The raw data is an IView Data File (IDF) that is further converted to a text file using the IDF converter tool provided by the SMI manufacturer.
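As an illustration of this step, the sketch below loads such a converted text export and keeps only the fields used for analysis. This is only a sketch: the exact column labels ("Time", "L Raw X [px]", "L Raw Y [px]") depend on the IDF converter's export settings and are assumptions here, not the tool's guaranteed header.

```python
# Hedged sketch: loading the IDF text export produced by the SMI converter.
# Column labels are assumptions; the real header depends on export settings.
import pandas as pd

def load_gaze_data(path):
    """Read the tab-separated export, keeping timestamp and gaze position."""
    df = pd.read_csv(path, sep="\t", comment="#")      # metadata lines start with '#'
    df = df[["Time", "L Raw X [px]", "L Raw Y [px]"]]  # assumed column labels
    df.columns = ["time", "raw_x", "raw_y"]
    return df
```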

3.3. Low Level Feature Extraction

The low level feature extraction module generates various events from the raw eye movement data. Eye movements are represented in terms of low-level features like fixations and saccades. Fixations are fixed gazes over an informative region of interest, and saccades are rapid eye movements between fixations. Information is perceived by the brain only during fixations. Low-level feature extraction translates the raw eye movement data into fixations and saccades; this reduces the complexity of analyzing eye movement data while retaining the cognitive and visual behavior characteristics. The research work [58] describes numerous algorithms for fixation identification, such as I-VT, I-DT, and I-HMM. I-VT is a velocity threshold algorithm that separates fixations from saccades based on point-to-point velocities. I-HMM uses Hidden Markov Models to determine fixations, and I-DT is a dispersion threshold algorithm that identifies fixations as groups of consecutive eye gaze points within a given dispersion.
In our research work, we used the I-DT algorithm for fixation identification. A fixation was identified when the eye gaze lasted for at least 100 milliseconds with a maximum dispersion of 100 pixels; otherwise, the samples were classified as saccades. If the gaze data is zero, it is marked as a blink. An open source eye tracking toolbox called PyGaze [59] was used for identifying events from the raw eye movement data. The module is schematically represented in Figure 2. It has two sub-modules: Read Data and Event Detector. The Read Data module scans the raw eye gaze data and creates a dictionary of information comprising time, trial number, and gaze points. The Event Detector uses the I-DT algorithm to identify fixations and saccades; a simplified sketch of this detection step is shown below.
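For illustration, here is a minimal dispersion-threshold (I-DT) sketch using the study's thresholds (100 ms minimum duration, 100 px maximum dispersion). The actual system relies on PyGaze's built-in event detection, so this is not the production implementation, only the underlying idea.

```python
def idt_fixations(samples, min_dur=100, max_disp=100):
    """samples: list of (timestamp_ms, x, y) tuples sorted by time.
    Returns fixations as (start_ms, end_ms, centroid_x, centroid_y)."""
    fixations, i = [], 0
    while i < len(samples):
        # grow an initial window spanning at least min_dur milliseconds
        j = i
        while j < len(samples) and samples[j][0] - samples[i][0] < min_dur:
            j += 1
        if j >= len(samples):
            break  # not enough remaining data for a fixation
        xs = [p[1] for p in samples[i:j + 1]]
        ys = [p[2] for p in samples[i:j + 1]]
        if (max(xs) - min(xs)) + (max(ys) - min(ys)) <= max_disp:
            # expand the window while dispersion stays within the threshold
            while j + 1 < len(samples):
                xs.append(samples[j + 1][1]); ys.append(samples[j + 1][2])
                if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_disp:
                    xs.pop(); ys.pop()
                    break
                j += 1
            fixations.append((samples[i][0], samples[j][0],
                              sum(xs) / len(xs), sum(ys) / len(ys)))
            i = j + 1  # consume the fixation's samples
        else:
            i += 1     # slide the window forward by one sample
    return fixations
```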

3.4. High Level Feature Extraction

The high level feature extraction module extracts features that indicate various cognitive and visual behaviors of the learner. The admin (the person using the system for assessment) provides details of the information of interest to the module as a text file. The information of interest is also known as an Area of Interest (AOI). To gain deeper insights into the cognitive behavior of learners, we identified AOIs in the MCQ. An MCQ has six AOI regions: the question (Q), the keyword (K), and four options (A, B, C, D), as represented in Figure 3. The calibration area is the complete screen area. The stimulus area contains the MCQ, and the whitespace is the area with no information. The keyword is the crucial clue in the question that leads to the selection of the correct option. The AOI details for extraction of high level features are the AOI name, start X and Y pixels, and the width and height of the AOI region.
The calibration area is distributed as follows: the mean area of the correct options (MeanCorrect = 14,760 pixels; 4.09% of the stimulus area) was comparable with that of the incorrect options (MeanIncorrect = 21,271 pixels; 6.05% of the stimulus area). The total stimulus area was MeanStimulus = 414,962 pixels, i.e., 39% of the total screen area. The questions had MeanQuestion = 76,386 pixels, i.e., 18.19% of the stimulus area. All fixations outside the stimulus area (whitespace) were excluded from the data analysis.
In our research work, we are interested in the information perceived by the learners, which is captured by fixation-related features. Commonly used fixation-related features are fixation duration, fixation count, and time to first fixation [22,60]. To capture how quickly the information is perceived, the time to first fixation metric is used. Fixation count indicates how many times the learner viewed the information of interest, and fixation duration indicates how long they viewed it.
A schematic representation of the high level feature extraction module is shown in Figure 4. The module has two sub-modules: the Gaze Estimator and the String Generation Engine. The Gaze Estimator uses the AOI information provided by the tutor to extract fixation-related features at each AOI. The time spent on each information of interest is represented as the Percentage Gaze Duration, computed using Equation (1). The String Generation Engine generates a scanpath string based on the sequence in which the learner visited the AOIs. For example, if the learner gazed at the question, then the keyword, and then options A, B, A, and C, the scanpath string is QKABAC. A learner can gaze at the same AOI multiple times; a collapsed scanpath string is generated by eliminating consecutively repeated characters. For example, if the scanpath string of the learner is QQQQQKKABAABCCDAQQ, then the collapsed scanpath string is QKABABCDAQ. The scanpath string provides information about how learners processed the information, what was perceived, and what was missed. The scanpath string and percentage gaze duration are the two high level features used in our study for analyzing learner performance; a small sketch implementing both features is given after Equation (1).
$\%GazeDuration_{AOI_x} = \dfrac{FixationDuration_x}{\sum FixationDuration}$  (1)

where $x \in \{Q, K, A, B, C, D\}$.
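A minimal sketch of these two features, assuming fixations have already been labelled with their AOI, follows; the input format is an assumption for illustration.

```python
from itertools import groupby
from collections import defaultdict

def collapse(scanpath):
    """'QQQQQKKABAABCCDAQQ' -> 'QKABABCDAQ' (drop consecutive repeats)."""
    return "".join(ch for ch, _ in groupby(scanpath))

def pct_gaze_duration(fixations):
    """fixations: list of (aoi_label, duration_ms) pairs, labels drawn from
    {Q, K, A, B, C, D}. Implements Equation (1), expressed as a percentage."""
    totals = defaultdict(float)
    for aoi, dur in fixations:
        totals[aoi] += dur
    grand = sum(totals.values())
    return {aoi: 100.0 * d / grand for aoi, d in totals.items()}
```

For example, `collapse("QQQQQKKABAABCCDAQQ")` returns `"QKABABCDAQ"`, matching the worked example above.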

3.5. Data Visualization

The data visualization module helps in the quick analysis of the data. PyGaze provides various methods to visualize eye movement data, such as the Fixation Map, Heatmap, and Scanpath. Fixation maps plot all the fixations of the learner over the stimuli; they are represented using circles, and the longer the fixation, the larger the circle. A scanpath is a sequence of fixations and saccades described using circles and lines or arrows; it provides details of the sequence of gaze over the stimuli. Heatmaps are the most common data visualization and show the regions of most and least attention of learners over the stimuli.
A scanpath provides only the sequence of fixations and saccades; it does not give these details with context. To interpret the gaze path of the learner, we developed a gaze sequence plot derived from the scanpath, very similar to the work in [61]. The gaze sequence plot provides details in terms of the information of interest, as shown in Figure 5. It represents the sequence of AOIs visited over time by the learner.
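A plot of this kind can be sketched in a few lines; the AOI ordering on the y-axis and the axis labels below are assumptions for illustration, not the exact layout of Figure 5.

```python
# Sketch of a gaze sequence plot: AOI visited (y-axis) against time (x-axis).
import matplotlib.pyplot as plt

def gaze_sequence_plot(fixations):
    """fixations: list of (start_time_ms, aoi_label) in temporal order,
    with labels drawn from {Q, K, A, B, C, D}."""
    order = ["Q", "K", "A", "B", "C", "D"]   # assumed y-axis ordering
    times = [t / 1000.0 for t, _ in fixations]
    levels = [order.index(a) for _, a in fixations]
    plt.step(times, levels, where="post")
    plt.yticks(range(len(order)), order)
    plt.xlabel("Time (s)")
    plt.ylabel("AOI visited")
    plt.show()
```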

3.6. Intelligent Eye Gaze Learner AsSEssment System-IEyeGASE

The IEyeGASE system provides fine-grained insights into the eye gaze patterns of learners in an online assessment, assessing their knowledge at the basic level of Bloom's Taxonomy called Remember or Recall (Section 4.1). The objective-based online assessment measures the learner's domain knowledge and how this knowledge leads to choosing the correct option. Eye tracking provides insight into the cognitive process associated with selecting the correct option. The visual and cognitive processes of learners during the assessment are captured by various hypotheses and cognitive parameters: the hypothesis of inattentional blindness, wavering behavior, the hypothesis of knowledge acquisition, confidence level, engagement, and findability. The IEyeGASE system models these cognitive parameters and hypotheses to provide personalized feedback on learner performance. The following subsections describe these cognitive parameters in detail.

3.6.1. Knowledge Acquisition Hypothesis (KA)

Objective-based assessments are used to measure the domain knowledge of the learner. Learners with high MCQ scores have higher levels of domain knowledge, and learners with low MCQ scores have low domain knowledge. Learners with a high level of knowledge acquisition, also called performing learners, fixate more on correct options than learners with insufficient domain knowledge acquisition, called underperforming learners. This hypothesis investigates the gaze bias toward the correct option. It is modeled as a linear regression model, represented in Figure 6. The %Gaze Duration values on all AOIs are the inputs for model construction, and the actual response (1 = correct option chosen, 0 = incorrect option chosen) is used as the target variable. The actual response is collected using the Think Aloud method: the learner thinks aloud the answer, which the observer of the experiment manually records. The predicted values lie between 0 and 1 and indicate the level of knowledge acquired by the learner. The predicted values are grouped into three levels, namely None [0–0.3], Partially [0.4–0.5], and Fully [0.6–1], where None means knowledge not acquired, Partially means partially acquired, and Fully means fully acquired.
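A hedged sketch of this model is shown below: an ordinary linear regression from the %Gaze Duration on the six AOIs to the recorded response. The feature ordering, function names, and the handling of the gaps between the published bins ([0.3, 0.4] and [0.5, 0.6]) are assumptions, not details given in the paper.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

AOIS = ["Q", "K", "A", "B", "C", "D"]

def fit_ka_model(gaze_rows, responses):
    """gaze_rows: dicts mapping AOI -> %Gaze Duration; responses: 0/1 labels
    recorded via the Think Aloud method."""
    X = np.array([[row.get(a, 0.0) for a in AOIS] for row in gaze_rows])
    return LinearRegression().fit(X, np.array(responses))

def ka_level(pred):
    """Bin a prediction into the paper's three KA classes."""
    if pred <= 0.3:
        return "None"
    if pred <= 0.5:
        return "Partially"   # gap boundaries closed by assumption
    return "Fully"
```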

3.6.2. Hypothesis of Inattentional Blindness (IB)

Inattentional blindness, or perceptual blindness, was observed in [62] while conducting experiments on perception and attention. It states that an individual fails to perceive some part of a stimulus even after looking at it. This hypothesis investigates whether the learner missed important information, i.e., keywords, while scanning a question. The tendency to miss keywords is formulated in Equation (2). A learner who performs well spends on average 9% of the total gaze duration on keywords (MCorrect = 9.13, SDCorrect = 9.36; MIncorrect = 13.55, SDIncorrect = 14.81). The threshold value was set to a lower value of 5%: a learner is inattentionally blind to keywords when the percentage gaze duration on the keyword is less than or equal to 5%. A sketch of this check follows Equation (2).
$IB = \begin{cases} \text{YES} & \text{if } \%GazeTime_k \le 5 \\ \text{NO} & \text{otherwise} \end{cases}$  (2)

where $k$ is the keyword and $\%GazeTime_k$ is the percentage gaze duration on the keyword.
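Expressed in code, the check is a single threshold test on the keyword AOI's share of gaze time; the dictionary input format is an assumption carried over from the earlier sketches.

```python
def inattentional_blindness(pct_gaze, threshold=5.0):
    """Equation (2): IB = YES when %GazeTime on the keyword AOI (label 'K')
    is at most 5%; the 5% threshold is the value stated in the text."""
    return "YES" if pct_gaze.get("K", 0.0) <= threshold else "NO"
```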

3.6.3. Wavering Behavior (WB)

In the decision-making literature, subjective preferences are driven by the gaze bias effect [4]. Learners are biased toward multiple options and make several revisits before deciding on a choice. This characteristic of learners is called Wavering Behavior: the gaze shifts between the most likely options with intermediate visits to the question. Wavering behavior can also lead to confusion and answering with incorrect options. It is measured by searching for repeated patterns in the scanpath string. Any repetitive pattern involving the correct option, an incorrect option, and the question implies WB. For example, for the stimulus in Figure 3, the correct option is a, and the incorrect options are b, c, and d; a repetitive pattern like ACACAQAC shows a wavering behavior between correct option a and incorrect option c. To identify the wavering behavior of learners, Algorithm 1 was developed; it takes the AOI string and the %Gaze Duration on all AOIs as input. Lines 1–2 generate the collapsed string, as discussed in the high level feature extraction section, and generate all repetitive substrings (we only considered substrings of length 2), counting how many times each substring is repeated. In line 3, the maximum repeated substring is identified. Lines 4–6 compute the time spent by the learner gazing between each pair of options in the MCQ, computed as the sum of the %Gaze Duration over all pairwise combinations of AOIs, and identify the AOI pair with the maximum value; this reveals the two options on which the learner gazed the most. Line 7 checks whether wavering behavior is observed in the learner, based on three conditions (a Python sketch of the algorithm follows the listing below):
  • The correct option is in the maximum repeated substring.
  • The correct option is in the AOI pair where the learner spent the maximum time.
  • The AOI pair with the maximum sum equals the maximum repeated substring.
The learner may also gaze repeatedly between two incorrect options; in such cases, we assume that the learner's knowledge of the concept is poor.
Algorithm 1: Algorithm for Wavering Behavior
Result: Wavering Behavior
Input: AOIString, %Gaze Duration on each AOI, CorrectOption
  • Generate collapsed string
  • Generate all repetitive substrings and compute their repetitive counts
  • Identify the maximum repeated substring
  • For all maximum combinations of AOIs compute the sum
  • Identify maximum sum from all combinations
  • Identify the AOI combination with maximum sum
  • Wavering Behavior is observed if the following conditions are met:
    (a) CorrectOption is in the maximum repeated substring
    (b) CorrectOption is in the AOI combination with the maximum sum
    (c) The AOI combination with the maximum sum equals the maximum repeated substring
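A possible Python rendering of Algorithm 1 is sketched below. Restricting the pairwise sums to the option AOIs A–D follows the description of lines 4–6 ("between two options") but is an interpretation; the function and variable names are assumptions.

```python
from itertools import combinations, groupby
from collections import Counter

def wavering_behavior(aoi_string, pct_gaze, correct):
    """Sketch of Algorithm 1. aoi_string is the AOI scanpath string (e.g.
    'QQKABABQAC'), pct_gaze maps each AOI letter to its %Gaze Duration,
    and correct is the correct-option label (e.g. 'A')."""
    # Step 1: collapsed string (drop consecutive repeats)
    collapsed = "".join(ch for ch, _ in groupby(aoi_string))
    # Steps 2-3: count length-2 substrings, pick the most repeated one
    pairs = Counter(collapsed[i:i + 2] for i in range(len(collapsed) - 1))
    if not pairs:
        return False
    top_pair = pairs.most_common(1)[0][0]
    # Steps 4-6: option pair with the largest summed %Gaze Duration
    options = [a for a in "ABCD" if a in pct_gaze]
    if len(options) < 2:
        return False
    top_combo = max(combinations(options, 2),
                    key=lambda c: pct_gaze[c[0]] + pct_gaze[c[1]])
    # Step 7: all three conditions must hold
    return (correct in top_pair
            and correct in top_combo
            and set(top_pair) == set(top_combo))
```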

3.6.4. Learner Confidence Level Hypothesis

Learner competence is mostly inferred from the percentage of correct answers in an MCQ assessment; a lack of confidence in the correct option is seldom evaluated. Learners may have a gaze bias toward the correct option but finally choose the wrong one. Eye tracking technology can provide insight into the confidence level of learners on correct options. The hypothesis is evaluated by comparing the fixation duration on the correct option versus the incorrect options. For example, a learner may have gazed longest at the correct option a in Figure 3 but end up choosing option c; this can be attributed to low confidence in the correct answer.
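A minimal sketch of this comparison is given below. Using the maximum %Gaze Duration among the options as the comparison rule is an assumption; the paper states only that the fixation duration on the correct option is compared with the incorrect options.

```python
def lacks_confidence(pct_gaze, correct, chosen):
    """Flag low confidence: the learner's longest-viewed option is the
    correct one, yet a different option was finally chosen."""
    options = {a: pct_gaze.get(a, 0.0) for a in "ABCD"}
    most_viewed = max(options, key=options.get)
    return most_viewed == correct and chosen != correct
```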

3.6.5. Findability and Engagement

Other cognitive parameters of interest are findability and engagement. Findability is the quickness in locating the correct option; it is computed based on the response time of the learner. Engagement is the time spent by the learner in completing the task. The stimulus screen has two parts, the stimulus area and the whitespace area, as represented in Figure 3. For example, if the learner spends 80% of the total gaze time on the stimulus area, the learner is engaged in the task.
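These two checks reduce to simple thresholds, sketched below. The 80% engagement cut-off mirrors the example in the text; the per-question findability cutoff is an assumption (e.g., a value set by the tutor), since the paper does not state one.

```python
def is_engaged(stimulus_gaze_pct, threshold=80.0):
    """Engagement: share of total gaze time on the stimulus area."""
    return stimulus_gaze_pct >= threshold

def is_quick(response_time_s, cutoff_s):
    """Findability: quickness in reaching the correct option, derived from
    the learner's response time; cutoff_s is an assumed per-question value."""
    return response_time_s <= cutoff_s
```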

4. Experimental Design

This section describes the experimental design for developing the intelligent system: Bloom's Taxonomy, the experiment procedure, the apparatus, participant details, etc.

4.1. Bloom’s Taxonomy

Bloom's Taxonomy is a hierarchical way of classifying different levels of thinking and is applied to design course objectives. The course objectives describe what is expected of learners by the end of the course [36,63]. Bloom's Taxonomy proposes six levels of thinking: Remember, Understand, Apply, Analyze, Evaluate, and Create.
In [63], the authors describe the various levels of Bloom's Taxonomy, how they differentiate between cognitive skill levels, and how these levels can lead to deeper learning and the transfer of knowledge and skills to a greater variety of tasks and contexts. Remember, the first level of thinking, leads to developing skills crucial for completing the pedagogical process; it is about recalling concepts from memory. Understand is the second level, where the learner explains ideas and concepts, discusses and describes them, and translates the points in some way. At the Apply level, the learned information is applied in new situations to solve a problem. Critical thinking happens at the Analyze level, where learners can distinguish between facts and opinions and break down information into smaller components. In the Evaluate state, the learner justifies a decision based on the knowledge acquired. The topmost level is Create, where learners produce new creative ideas of their own in solving problems. This framework enables tutors to create achievable learning goals and develop plans to meet them. It ensures learners demonstrate the corresponding cognitive skills in solving each problem. Evaluators can apply this taxonomy by asking questions in the form of MCQs that correlate with specific learning goals at the basic levels of Remember and Understand.
In our research work, the focus is on the basic level of thinking, Remember; this level is crucial for laying a solid foundation for learning. At this level, learners memorize facts and concepts and recall them when required. All the materials used in the study were discussed with the tutor, and MCQs were defined to achieve the goal of assessing the basic level of learning, Remember.

4.2. Materials

The objective-based assessment used in the present study consisted of five MCQs based on Object-Oriented Programming with Java [64]. Since the objective is to test the lower level of Bloom's Taxonomy, the topics include keywords, operators, decision-making, loops, constructors, and object creation and initialization in Java. All the MCQs are oriented towards recall from long-term memory; no logical or problem-solving questions were tested. Each MCQ is a short question with a keyword that can provide a clue to the correct option. Each MCQ comprises four alternatives, namely the correct choice and incorrect options that act as distractors. The correct option implies that the concept has been learned. The distractors are closely associated with the correct option and create confusion in learners' minds. The order of presentation of the options is not of interest in the design of the present study. The following are the MCQs, with the keywords that provide valuable information towards the correct choice and the various answer options:
  • MCQ1: Which keyword is used by method to refer to the object that invoked it?
    Keyword: object that invoked it
    Options
    • import (Incorrect)
    • this (Correct)
    • catch (Incorrect)
    • super (Incorrect)
    Concept Learned: Object creation and initialization
  • MCQ2: List the arithmetic operators in increasing order of precedence.
    Keyword: increasing order of precedence
    Options
    • * + % - / (Incorrect)
    • * / - + % (Incorrect)
    • * / % + - (Correct)
    • * % / + - (Incorrect)
    Concept Learned: Operators in Java
  • MCQ3: Which of the following are keywords in Java?
    Keyword: keywords
    Options
    • while, switch, if, static, bool (Incorrect)
    • while, switch, Boolean, static, pack (Incorrect)
    • break, catch, ball, return, switch (Incorrect)
    • while, switch, break, Boolean, catch (Correct)
    Concept Learned: Keywords in Java
  • MCQ4: Which keyword is used to invoke base class constructor?
    Keyword: invoke base class constructor
    Options
    • this (Incorrect)
    • import (Incorrect)
    • refer (Incorrect)
    • super (Correct)
    Concept Learned: Constructor
  • MCQ5: Which of these jump statements can skip processing remainder of code in its body for a particular iteration?
    Keyword: particular iteration
    Options
    • continue (Correct)
    • return (Incorrect)
    • break (Incorrect)
    • exit (Incorrect)
    Concept Learned: Decision Making and Loops

4.3. Participants

Fifteen learners from Amrita Vishwa Vidyapeetham University, Bengaluru, India volunteered for the experiment. The learners were graduates from the Department of Computer Science and Engineering undergoing a course in Object-Oriented Programming using Java. The learners either had prior knowledge of C and C++ programming or no prior programming knowledge. There were three female and twelve male participants, and their average age was 19 years. The learners had normal or corrected vision. The participants were briefed on the study procedure and informed that they could abandon the study at any time. The total number of samples collected was 75 (15 learners × 5 MCQs).

4.4. Apparatus

Eye movements were recorded using a remote, video-based SensoMotoric Instruments (SMI) REDn Professional eye tracker [65] with a sampling frequency of 60 Hz. The study was conducted in the eye tracking lab, and learners were tested individually. The screen resolution was 1366 × 768 pixels. The experiment was set up using the SMI Experiment Suite 360° software. Each MCQ was presented on a single screen. The learners were seated at a distance of 60–70 cm from the monitor. The eye tracking system was calibrated using a 9-point calibration image. If the calibration error exceeded 0.7 degrees of visual angle, the observer requested recalibration.

4.5. Procedure

The experimental study was conducted in the eye tracking lab at Amrita Vishwa Vidyapeetham University. Time slots were provided to the volunteers of the study. After signing the informed consent, learners were briefed on the dos and don'ts of the eye tracking study. Next, the eye tracking equipment was calibrated; after successful calibration, learners were once more reminded to think aloud their answers, and the assessment began. The learners worked at their own pace. There was no option to move to a previous MCQ; to move to the next question, they pressed the Enter key. All the MCQs were shown in the same order to all learners. Learners were rewarded with a small treat (fruit juice and chocolate).

4.6. Data Analyses

The test scores of the learners were manually recorded in a spreadsheet. All statistical analyses were conducted in R version 3.5. The PyGaze analyzer was used to detect the low level and high level features. The intelligent IEyeGASE system was developed in Python 3.7.

5. Results

The IEyeGASE system provides deeper insights into the learner's performance for each question in the objective-based assessment. The results for each hypothesis are discussed in this section. Table 1 lists the learners who answered each MCQ correctly (performing learners) and incorrectly (underperforming learners) based on their responses. The tags S1, S2, etc., represent the learner IDs. The column Correct Option gives the answer to each MCQ discussed in Section 4.2. The columns Learners Answered Correctly and Learners Answered Incorrectly list the IDs of the learners who got the MCQ right and wrong, respectively. Each question aims at evaluating a different concept discussed in the Materials section. The following subsections discuss the results of the IEyeGASE system.

5.1. Hypothesis of Inattentional Blindness (IB)

The Welch two-sample t-test reveals no significant differences between performing and underperforming learners in gaze data on keywords and in correctly choosing the option; Table 2 shows the results of the test. However, a paired t-test between the percentage gaze duration on the keyword and on the correct option shows significant differences for MCQ2, MCQ3, and MCQ4, as represented in Table 3. This implies that gazing at keywords leads to the correct option. The heat maps in Figure 7 and Figure 8 contrast the inattentional blindness of an underperforming learner with a performing learner. Figure 7 shows that the fixation of underperforming learner S6 on the keywords "object that invoked it" was low, and the option chosen by the learner was the incorrect option a, import; this indicates that the learner failed to perform because the critical clue in the question was missed. Figure 8 shows that the fixation of performing learner S1 on the keywords "object that invoked it" was high, leading to fixation on the correct option b, this. The statistical analysis and the visualizations clearly show that gazing at important clues leads to selecting the correct option. The IEyeGASE system predicted that 36% of the learners had inattentional blindness; 16% of the learners answered the question wrongly due to inattentional blindness.
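The statistics were run in R 3.5; an equivalent computation in Python's scipy is shown below purely for illustration, with assumed argument names.

```python
# Illustrative only: the study's two tests re-expressed with scipy.stats.
from scipy import stats

def ib_tests(perf_kw, under_kw, kw_pct, correct_pct):
    """perf_kw / under_kw: %gaze on keywords for performing vs underperforming
    learners (independent samples); kw_pct / correct_pct: per-learner paired
    %gaze on the keyword and on the correct option."""
    welch = stats.ttest_ind(perf_kw, under_kw, equal_var=False)  # Welch t-test
    paired = stats.ttest_rel(kw_pct, correct_pct)                # paired t-test
    return welch, paired
```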

5.2. Wavering Behaviour

A wavering behavior is observed when learners are not sure which option to choose and gaze repeatedly between the different options. The alternative options in MCQs are distractors introduced to investigate confusion in the minds of learners. Figure 9 demonstrates the gaze wavering of an underperforming learner between options a and b for MCQ1; the correct option is b, and the learner ended up selecting the incorrect option. The results indicate confusion between options in underperforming learners. A wavering behavior was also observed in a performing learner for MCQ2 between incorrect option d and correct option c, as shown in Figure 10. The gaze duration was longer on option c, and the learner ended up selecting the correct option; this could also be a mere guess. Based on the %Gaze Duration and the AOI string, the IEyeGASE system predicted wavering behavior in 17% of learners; 8% of them failed to perform, and 9% of them guessed the correct answer.

5.3. Knowledge Acquisition Hypothesis

Table 4 represents the number of samples of performing and underperforming learners categorized into the different KA classes. The IEyeGASE linear regression model predicts the knowledge acquisition of learners as Fully, Partially, or None. Considering the Fully and None classes, the regression model produces an accuracy of 76% when compared to the ground truth response values. On further analysis of the nine underperforming learner samples that the model predicted as Fully acquired, we observed three learners with WB, three learners who lacked confidence in answering the correct option, one learner with low engagement on the stimuli, and the others with unknown reasons. Similarly, for the six performing learner samples that the model predicted as None acquired, all the learners were quick at the task, and hence the time spent on the correct option was low. Among the underperforming learners who partially acquired knowledge, one learner had WB, two learners did not engage in the task, one learner showed inattentional blindness, and one learner lacked confidence. These insights were provided to the tutor for further validation and as feedback to learners for improvement.

5.4. Findability and Engagement

The findability and engagement of the learners in each task were analyzed. The IEyeGASE system predicted learners S3, S5, S9, and S14 to be quick at different tasks: S3 was fast at MCQs 1 and 3, S5 at MCQs 3 and 4, and S9 at MCQ 3. Twenty-six learner samples were predicted as not engaged in the task; these learners were either quick at the task, not interested in the task, or gazed more at whitespace than the stimulus area. Figure 11 represents the engagement (YES and NO) of learners in the different KA classes. The graph shows that engagement is an essential factor in completing a task. We also found that learners categorized as KA None were nevertheless engaged in the task; they might have been trying to recollect the answers from memory.

5.5. Confidence Level Hypothesis (CL)

Learners can have a gaze bias toward the correct option but choose an incorrect option. Figure 12 represents a heat map of an underperforming learner whose maximum gaze is on option d, which is the correct option, yet the learner fails to answer correctly. This implies that to choose the correct option, the learner must both know it and have the confidence to select it. The IEyeGASE system predicted three learners exhibiting a lack of confidence: S8 on MCQ 2, S13 on MCQs 3 and 5, and S15 on MCQ 2.

6. Discussion

The IEyeGASE system offers functionalities similar to those of an online assessment system; however, several features differentiate it from other assessment systems. A comparison with the traditional pen-and-paper assessment and online assessment platforms can be seen in Table 5. All assessment systems provide a score that quantifies the performance of the learner. The online assessment system and the IEyeGASE system provide an easy-to-use interface to visualize the personalized results, and neither requires a dedicated resource such as a tutor or evaluator to interpret the results. In the case of online assessment, response time and engagement in the task are based on how quickly and for how long the learner took to select an option; in the case of the IEyeGASE system, they are based on how quickly the learner gazed at the option and how long the learner gazed at the AOIs.
The IEyeGASE system provides personalized feedback on the performance of the learner. Figure 13 shows the personalized feedback of an underperforming learner for the concept "Object creation and initialization": the learner missed the information "object that invoked it" and hence failed to choose the correct option. Figure 14 shows the personalized feedback of a performing learner who nevertheless exhibited a wavering behavior between options a, import, and b, this. The visualization shows the learner gazed at options a and b multiple times while also gazing at the keyword and the question. The personalized feedback helps the learner and the evaluator gain better insights into the performance even though the score identifies the learner as a good learner.
Table 6 summarizes the deeper insights into the various aspects of learners for each MCQ. The traditional pen-and-paper approach or online assessment fails to bring out these insights into learners' performance. The IEyeGASE system provides detailed feedback on learners' performance that can be used by tutors to improve the quality of education and redefine learner goals. For example, learner S1 performed well on MCQs 1, 2, 3, and 5. A wavering behavior was observed while answering MCQs 4 and 5; the wavering behavior resulted in failing MCQ 4. In the case of MCQ 5, the learner was confused between options, indicating that the learner needs to revisit the concept. In another example, learner S6 was lucky enough to answer MCQs 3 and 5 correctly. However, the learner did not perform well on the other MCQs and ignored critical information in the questions; the engagement in the task was poor for MCQs 3, 4, and 5.
Learners S1, S3, and S11 were female, and the others were male. The study [66] investigates the difference in viewing behavior between male and female participants; the results indicate that females showed more exploratory gaze behavior and shorter fixation ratios. A quick analysis of the results of the IEyeGASE system indicates a similar behavior in female learners: they were generally engaged in the task, and S3 in particular was quick, with short fixations on options. Male learners, especially S8, S13, and S15, exhibited low confidence in answering the correct option. An extensive analysis of the differences in behavior between female and male learners is not within the scope of the present study.
The tutor can use the insights from the personalized feedback to understand why the learner failed to perform or what factors affected performance while trying to recall information from memory. The present system focused on predicting learners' performance at the lower level of Bloom's Taxonomy. We plan to extend the work to understanding the parameters affecting critical thinking at the higher levels of Bloom's Taxonomy.

Limitations

A limitation of the IEyeGASE system is that it provides personalized assessment for independent tasks and does not provide continuous monitoring of the learner's progress; instead, the system provides feedback to tutors and learners to help them recalibrate their learning goals. Another limitation is that the system only works with objective assessment; however, cognitive parameters like engagement and inattentional blindness can be extended to subjective assessments.
The current IEyeGASE system does not provide insights into cognitive parameters related to the higher-level thinking of Bloom's Taxonomy. However, the parameters discussed are also applicable to other levels. Further studies will incorporate specific cognitive parameters that are indicators of higher levels of thinking.
Data analysis is limited to eye gaze data; other data sources such as audio, mouse, and video are not considered.
The present study was conducted on a limited number of learners and will be extended to a larger number of learners.

7. Conclusions

Assessment systems have mostly moved from traditional pen and paper to objective-based online assessment. Conventional methods of evaluation provide feedback on performance based on the score obtained; however, what is perceived and what affected the performance of learners is crucial for tutors when planning lessons. An intelligent learner assessment system called IEyeGASE was developed; it uses eye gaze data to quantify the factors affecting the learner's performance. The IEyeGASE system evaluates the learner at the lowest level of Bloom's Taxonomy, Remember. The various models of the IEyeGASE system provide feedback based on measured cognitive parameters such as engagement in a task, quickness, knowledge acquisition, confidence level, inattentional blindness, and wavering behaviour. Learners can use the feedback to learn better, and tutors can use it to improve the quality of learning and redefine the learning goals. The IEyeGASE system can be easily integrated into existing digital evaluation systems to provide deeper insights into the learner's performance. In the future, the intelligent assessment system will be extended to understand and quantify the cognitive parameters affecting critical thinking at the higher levels of Bloom's Taxonomy.

Author Contributions

All authors have equally contributed to this research work. All authors have read and agreed to the published version of the manuscript.

Funding

Not Applicable.

Institutional Review Board Statement

The study was conducted according to the guidelines of the Ethical Committee of Amrita Vishwa Vidyapeetham.

Informed Consent Statement

Written consent was obtained from all learners involved in the study.

Data Availability Statement

Not Applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Dolin, J.; Black, P.; Harlen, W.; Tiberghien, A. Exploring relations between formative and summative assessment. In Transforming Assessment; Springer: Cham, Switzerland, 2018; pp. 53–80. [Google Scholar]
  2. Dirkx, K.; Skuballa, I.; Manastirean-Zijlstra, C.; Jarodzka, H. Designing computer-based tests: Design guidelines from multimedia learning studied with eye tracking. Instr. Sci. 2021, 49, 589–605. [Google Scholar] [CrossRef]
  3. Tsai, M.J.; Hou, H.T.; Lai, M.L.; Liu, W.Y.; Yang, F.Y. Visual attention for solving multiple-choice science problem: An eye-tracking analysis. Comput. Educ. 2012, 58, 375–385. [Google Scholar] [CrossRef]
  4. Lindner, M.A.; Eitel, A.; Thoma, G.B.; Dalehefte, I.M.; Ihme, J.M.; Köller, O. Tracking the decision-making process in multiple-choice assessment: Evidence from eye movements. Appl. Cogn. Psychol. 2014, 28, 738–752. [Google Scholar] [CrossRef]
  5. Socrative. Available online: https://www.socrative.com/ (accessed on 23 September 2021).
  6. Mentimeter Education. Available online: https://www.mentimeter.com/solutions/education (accessed on 23 September 2021).
  7. Haridas, M.; Gutjahr, G.; Raman, R.; Ramaraju, R.; Nedungadi, P. Predicting school performance and early risk of failure from an intelligent tutoring system. Educ. Inf. Technol. 2020, 25, 3995–4013. [Google Scholar] [CrossRef]
  8. Predictive Cognitive Assessment. Available online: https://www.predictiveindex.com/assessments/cognitive-assessment/ (accessed on 20 September 2021).
  9. The McQuaig Mental Agility Test. Available online: https://mcquaig.co.uk/psychometric-system/tools/mcquaig-mental-agility-test/ (accessed on 23 September 2021).
  10. Heeneman, S.; Schut, S.; Donkers, J.; van der Vleuten, C.; Muijtjens, A. Embedding of the progress test in an assessment program designed according to the principles of programmatic assessment. Med. Teach. 2017, 39, 44–52. [Google Scholar] [CrossRef] [Green Version]
  11. Bombeke, K. Early sensory attention and pupil size in cognitive control: An EEG approach. Ph.D. Thesis, Ghent University, Ghent, Belgium, 2017. [Google Scholar]
  12. Cuesta-Cambra, U.; Niño-González, J.I.; Rodríguez-Terceño, J. The Cognitive Processing of an Educational App with EEG and ‘Eye Tracking’. Comun. Media Educ. Res. J. 2017, 25, 41–50. [Google Scholar] [CrossRef] [Green Version]
  13. Keskin, M.; Ooms, K.; Dogru, A.O.; De Maeyer, P. Exploring the cognitive load of expert and novice map users using EEG and eye tracking. ISPRS Int. J. Geo-Inf. 2020, 9, 429. [Google Scholar] [CrossRef]
  14. Nikolaev, A.R.; Meghanathan, R.N.; van Leeuwen, C. Combining EEG and eye movement recording in free viewing: Pitfalls and possibilities. Brain Cogn. 2016, 107, 55–83. [Google Scholar] [CrossRef]
  15. Just, M.; Carpenter, P. Eye fixations and cognitive processes. Cogn. Psychol. 1976, 8, 441–480. [Google Scholar] [CrossRef]
  16. Navya, Y.; SriDevi, S.; Akhila, P.; Amudha, J.; Jyotsna, C. Third Eye: Assistance for Reading Disability. In International Conference on Soft Computing and Signal Processing; Springer: Singapore, 2019; pp. 237–248. [Google Scholar]
  17. Paulson, E.J.; Henry, J. Does the Degrees of Reading Power assessment reflect the reading process? An eye-movement examination. J. Adolesc. Adult Lit. 2002, 46, 234–244. [Google Scholar]
  18. Rayner, K. Eye movements in reading and information processing: 20 years of research. Psychol. Bull. 1998, 124, 372. [Google Scholar] [CrossRef]
  19. Rayner, K.; Pollatsek, A. Eye movement control during reading: Evidence for direct control. Q. J. Exp. Psychol. 1981, 33, 351–373. [Google Scholar] [CrossRef] [PubMed]
  20. Rayner, K.; Chace, K.H.; Slattery, T.J.; Ashby, J. Eye movements as reflections of comprehension processes in reading. Sci. Stud. Read. 2006, 10, 241–255. [Google Scholar] [CrossRef]
  21. Chandrika, K.R.; Amudha, J. An eye tracking study to understand the visual perception behavior while source code comprehension. In Proceedings of the Second International Conference on Sustainable Computing Techniques in Engineering, Science and Management, Belgaum, India, 27–28 January 2017. [Google Scholar]
22. Chandrika, K.R.; Amudha, J.; Sudarsan, S.D. Recognizing eye tracking traits for source code review. In Proceedings of the 2017 22nd IEEE International Conference on Emerging Technologies and Factory Automation (ETFA), Limassol, Cyprus, 12–15 September 2017; pp. 1–8. [Google Scholar]
  23. Radach, R.; Kennedy, A. Theoretical perspectives on eye movements in reading: Past controversies, current issues, and an agenda for future research. Eur. J. Cogn. Psychol. 2004, 16, 3–26. [Google Scholar] [CrossRef]
  24. Jacob, R.J.; Karn, K.S. Eye tracking in human-computer interaction and usability research: Ready to deliver the promises. In The Mind’s Eye; Elsevier: North-Holland, The Netherlands, 2003; pp. 573–605. [Google Scholar]
  25. Gautam, G.; Sumanth, G.; Karthikeyan, K.; Sundar, S.; Venkataraman, D. Eye movement based electronic wheel chair for physically challenged persons. Int. J. Sci. Technol. Res. 2014, 3, 206–212. [Google Scholar]
  26. Chandrika, K.R.; Amudha, J. A fuzzy inference system to recommend skills for source code review using eye movement data. J. Intell. Fuzzy Syst. 2018, 34, 1743–1754. [Google Scholar] [CrossRef]
  27. Harada, H.; Nakayama, M. Estimation of reading ability of program codes using features of eye movements. ACM Symp. Eye Track. Res. Appl. 2021, 32. [Google Scholar] [CrossRef]
  28. Hegarty, M.; Mayer, R.E.; Green, C.E. Comprehension of arithmetic word problems: Evidence from students’ eye fixations. J. Educ. Psychol. 1992, 84, 76. [Google Scholar] [CrossRef]
  29. Tai, R.H.; Loehr, J.F.; Brigham, F.J. An exploration of the use of eye-gaze tracking to study problem-solving on standardized science assessments. Int. J. Res. Method Educ. 2006, 29, 185–208. [Google Scholar] [CrossRef]
  30. Calvi, C.; Porta, M.; Sacchi, D. e5Learning, an e-learning environment based on eye tracking. In Proceedings of the 2008 Eighth IEEE International Conference on Advanced Learning Technologies, Santander, Spain, 1–5 July 2008; pp. 376–380. [Google Scholar]
  31. Chen, S.C.; She, H.C.; Chuang, M.H.; Wu, J.Y.; Tsai, J.L.; Jung, T.P. Eye movements predict students’ computer-based assessment performance of physics concepts in different presentation modalities. Comput. Educ. 2014, 74, 61–72. [Google Scholar] [CrossRef]
  32. Barrios, V.M.G.; Gütl, C.; Preis, A.M.; Andrews, K.; Pivec, M.; Mödritscher, F.; Trummer, C. AdELE: A framework for adaptive e-learning through eye tracking. Proc. IKNOW 2004, 609–616. [Google Scholar]
  33. Narayanan, R.; Rangan, V.P.; Gopalakrishnan, U.; Hariharan, B. Multiparty gaze preservation through perspective switching for interactive elearning environments. Multimed. Tools Appl. 2019, 78, 17461–17494. [Google Scholar] [CrossRef]
  34. Nugrahaningsih, N.; Porta, M.; Ricotti, S. Gaze behavior analysis in multiple-answer tests: An Eye tracking investigation. In Proceedings of the 2013 12th International Conference on Information Technology Based Higher Education and Training (ITHET), Antalya, Turkey, 10–12 October 2013; pp. 1–6. [Google Scholar]
  35. Ujbanyi, T.; Katona, J.; Sziladi, G.; Kovari, A. Eye-tracking analysis of computer networks exam question besides different skilled groups. In Proceedings of the 2016 7th IEEE International Conference on Cognitive Infocommunications (CogInfoCom), Wroclaw, Poland, 16–18 October 2016; pp. 000277–000282. [Google Scholar]
36. Anderson, L.W.; Sosniak, L.A. Bloom's Taxonomy; University of Chicago Press: Chicago, IL, USA, 1994. [Google Scholar]
  37. Edwards, S.H. Using software testing to move students from trial-and-error to reflection-in-action. In Proceedings of the 35th SIGCSE Technical Symposium on Computer Science Education, Norfolk, VA, USA, 3–7 March 2004; pp. 26–30. [Google Scholar]
  38. Mangaroska, K.; Giannakos, M. Learning analytics for learning design: Towards evidence-driven decisions to enhance learning. In Proceedings of the European Conference on Technology Enhanced Learning; Springer: Cham, Switzerland, 2017; pp. 428–433. [Google Scholar]
  39. Persico, D.; Pozzi, F. Informing learning design with learning analytics to improve teacher inquiry. Br. J. Educ. Technol. 2015, 46, 230–248. [Google Scholar] [CrossRef]
  40. Haag, J.; Witte, C.; Karsch, S.; Vranken, H.; van Eekelen, M. Evaluation of students’ learning behaviour and success in a practical computer networking course. In Proceedings of the 2013 Second International Conference on E-Learning and E-Technologies in Education (ICEEE), Lodz, Poland, 23–25 September 2013; pp. 201–206. [Google Scholar]
  41. Sass, S.; Schuette, K.; Lindner, M.A. Test-takers’ eye movements: Effects of integration aids and types of graphical representations. Comput. Educ. 2017, 109, 85–97. [Google Scholar] [CrossRef]
  42. Haridas, M.; Vasudevan, N.; Gayathry, S.; Gutjahr, G.; Raman, R.; Nedungadi, P. Feature-Aware knowledge tracing for generation of concept-knowledge reports in an intelligent tutoring system. In Proceedings of the 2019 IEEE Tenth International Conference on Technology for Education (T4E), Goa, India, 9–11 December 2019; pp. 142–145. [Google Scholar]
  43. Beck, J.E.; Sison, J. Using knowledge tracing in a noisy environment to measure student reading proficiencies. Int. J. Artif. Intell. Educ. 2006, 16, 129–143. [Google Scholar]
  44. Rus, V.; Lintean, M.; Azevedo, R. Automatic Detection of Student Mental Models during Prior Knowledge Activation in MetaTutor. Int. Work. Group Educ. Data Min. 2009, 21, 169–190. [Google Scholar]
45. Jarodzka, H.; Holmqvist, K.; Gruber, H. Eye tracking in Educational Science: Theoretical frameworks and research agendas. J. Eye Mov. Res. 2017, 10. [Google Scholar] [CrossRef]
  46. Richardson, D.C.; Dale, R. Looking to understand: The coupling between speakers’ and listeners’ eye movements and its relationship to discourse comprehension. Cogn. Sci. 2005, 29, 1045–1060. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  47. Balslev, T.; Jarodzka, H.; Holmqvist, K.; de Grave, W.; Muijtjens, A.M.; Eika, B.; van Merriënboer, J.; Scherpbier, A.J. Visual expertise in paediatric neurology. Eur. J. Paediatr. Neurol. 2012, 16, 161–166. [Google Scholar] [CrossRef] [PubMed]
  48. Lachner, A.; Jarodzka, H.; Nückles, M. What makes an expert teacher? Investigating teachers’ professional vision and discourse abilities. Instr. Sci. 2016, 44, 197–203. [Google Scholar] [CrossRef]
  49. Van Gog, T.; Jarodzka, H.; Scheiter, K.; Gerjets, P.; Paas, F. Attention guidance during example study via the model’s eye movements. Comput. Hum. Behav. 2009, 25, 785–791. [Google Scholar] [CrossRef]
  50. Jarodzka, H.; Van Gog, T.; Dorr, M.; Scheiter, K.; Gerjets, P. Learning to see: Guiding students’ attention via a model’s eye movements fosters learning. Learn. Instr. 2013, 25, 62–70. [Google Scholar] [CrossRef]
  51. Kok, E.M.; Jarodzka, H.; de Bruin, A.B.; BinAmir, H.A.; Robben, S.G.; van Merriënboer, J.J. Systematic viewing in radiology: Seeing more, missing less? Adv. Health Sci. Educ. 2016, 21, 189–205. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  52. Busjahn, T.; Tamm, S. A Deeper Analysis of AOI Coverage in Code Reading. ACM Symp. Eye Track. Res. Appl. 2021, 1–7. [Google Scholar]
  53. Šašinka, Č.; Morong, K.; Stachoň, Z. The Hypothesis platform: An online tool for experimental research into work with maps and behavior in electronic environments. ISPRS Int. J. Geo-Inf. 2017, 6, 407. [Google Scholar] [CrossRef] [Green Version]
  54. Aljehane, S.; Sharif, B.; Maletic, J. Determining Differences in Reading Behavior Between Experts and Novices by Investigating Eye Movement on Source Code Constructs During a Bug Fixing Task. ACM Symp. Eye Track. Res. Appl. 2021, 1–6. [Google Scholar] [CrossRef]
  55. EyeLogic. Available online: https://eyelogic.de/smi-compatibility/ (accessed on 24 September 2021).
56. Van Someren, M.; Barnard, Y.; Sandberg, J. The Think Aloud Method: A Practical Guide to Modelling Cognitive Processes; Academic Press: London, UK, 1994. [Google Scholar]
  57. IMOTIONS. Available online: https://imotions.com/hardware/smi-red/ (accessed on 24 September 2021).
  58. Salvucci, D.D.; Goldberg, J.H. Identifying fixations and saccades in eye-tracking protocols. In Proceedings of the 2000 Symposium on Eye Tracking Research & Applications, New York, NY, USA, 6–8 November 2000; pp. 71–78. [Google Scholar]
  59. Dalmaijer, E.S.; Mathôt, S.; Van der Stigchel, S. PyGaze: An open-source, cross-platform toolbox for minimal-effort programming of eyetracking experiments. Behav. Res. Methods 2014, 46, 913–921. [Google Scholar] [CrossRef] [PubMed]
  60. Sharafi, Z.; Shaffer, T.; Sharif, B.; Guéhéneuc, Y.G. Eye-tracking metrics in software engineering. In Proceedings of the 2015 Asia-Pacific Software Engineering Conference (APSEC), New Delhi, India, 1–4 December 2015; pp. 96–103. [Google Scholar]
  61. Dolezalova, J.; Popelka, S. ScanGraph: A novel scanpath comparison method using graph cliques visualization. J. Eye Mov. Res. 2016, 9, 1–13. [Google Scholar]
62. Mack, A.; Rock, I. Inattentional Blindness: Perception without Attention; MIT Press: Cambridge, MA, USA, 1998. [Google Scholar]
63. Adams, N.E. Bloom's taxonomy of cognitive learning objectives. J. Med. Libr. Assoc. 2015, 103, 152. [Google Scholar] [CrossRef]
  64. Computer Science Edu. Available online: https://compsciedu.com/ (accessed on 23 September 2021).
65. SensoMotoric Instruments. Available online: https://gazeintelligence.com/smi-software-download (accessed on 22 September 2021).
  66. Sargezeh, B.A.; Tavakoli, N.; Daliri, M.R. Gender-based eye movement differences in passive indoor picture viewing: An eye-tracking study. Physiol. Behav. 2019, 206, 43–50. [Google Scholar] [CrossRef]
Figure 1. Learner Assessment System.
Figure 2. Low Level Feature Extraction.
Figure 3. Area of Interest (AOI) representation.
Figure 4. High Level Feature Extraction.
Figure 5. Gaze Sequence Plot.
Figure 6. Knowledge Acquisition Model.
Figure 7. Heatmap of learner S6 on MCQ1 representing Inattentional Blindness.
Figure 8. Heatmap of learner S1 on MCQ1 representing no Inattentional Blindness; more gaze on the keyword and the correct option.
Figure 9. Gaze sequence plot representing WB of an underperforming learner, where WB is observed between incorrect options.
Figure 10. Gaze sequence plot representing WB of a performing learner, where WB is observed between correct and incorrect options.
Figure 11. Graph representing engagement in task for learners in different KA classes.
Figure 12. Heatmap representing Lack of Confidence of an underperforming learner for MCQ3.
Figure 13. Personalized feedback of an underperforming learner.
Figure 14. Personalized feedback of a performing learner exhibiting wavering behavior.
Table 1. Learner response for each MCQ.
MCQ | Correct Option | Learners Answered Correctly | Learners Answered Incorrectly
MCQ1 | B | S1, S3, S4, S5, S7, S10, S11, S12, S13, S14 | S2, S6, S8, S9, S15
MCQ2 | C | S1, S3, S4, S5, S9, S12 | S2, S6, S7, S8, S10, S11, S13, S14, S15
MCQ3 | D | S1, S3, S5, S9, S12, S14 | S2, S4, S6, S7, S8, S10, S13, S15
MCQ4 | D | S5 | S1, S2, S3, S4, S6, S7, S8, S9, S10, S11, S12, S14, S15
MCQ5 | A | S1, S3, S5, S8, S10, S12, S14 | S2, S4, S6, S7, S11, S14, S15
Table 2. Mean and standard deviation of %gaze duration on keywords for performing and underperforming learners.
MCQ | Performing Learners N, M (SD) | Underperforming Learners N, M (SD) | p-Value
MCQ1 | 10, 15.1 (7.44) | 5, 8.6 (7.68) | 0.141
MCQ2 | 6, 6.83 (8.00) | 9, 10.30 (4.17) | 0.288
MCQ3 | 6, 2.3 (2.6) | 9, 1.9 (2.14) | 0.754
MCQ4 | 1, 36.8 (NA) | 14, 29.9 (16.03) | 0.683
MCQ5 | 7, 4.45 (4.01) | 8, 4.68 (3.37) | 0.907
N = sample size, M = mean, SD = standard deviation.
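The exact test behind Table 2 is not restated in this excerpt, but the reported p-values can be sanity-checked from the summary statistics alone. A minimal sketch, assuming a pooled-variance independent-samples t-test (an assumption, not a confirmed detail of the authors' pipeline), reproduces the MCQ1 row to within rounding:

```python
# Minimal sketch: re-deriving a Table 2 p-value from its summary statistics.
# Assumes a pooled-variance (Student's) independent-samples t-test.
from scipy.stats import ttest_ind_from_stats

t_stat, p_value = ttest_ind_from_stats(
    mean1=15.1, std1=7.44, nobs1=10,  # performing learners, MCQ1
    mean2=8.6, std2=7.68, nobs2=5,    # underperforming learners, MCQ1
    equal_var=True,
)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")  # p comes out near the reported 0.141
```

Repeating this check on the other rows gives p-values matching the reported ones to within rounding, which supports reading Table 2 as an unpaired comparison of the two learner groups.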
Table 3. t-test comparison of gazing at the correct option and the keyword.
MCQ | t(14) | p-Value
MCQ1 | 0.40362 | 0.6926
MCQ2 | 2.9576 | 0.01
MCQ3 | 3.24 | 0.0059
MCQ4 | −5.925 | 0.00003
MCQ5 | 1.9 | 0.07
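Table 3 reports t with 14 degrees of freedom, consistent with a paired comparison across all 15 learners (df = N − 1). The sketch below illustrates that computation under this assumption; the per-learner arrays are hypothetical placeholders, since the raw %gaze durations are not reproduced here:

```python
# Illustrative sketch of a paired t-test across 15 learners (df = 14).
# The gaze values below are random placeholders, NOT the study's data.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(42)
gaze_correct_option = rng.uniform(0, 30, size=15)  # %gaze duration per learner
gaze_keyword = rng.uniform(0, 30, size=15)

t_stat, p_value = ttest_rel(gaze_correct_option, gaze_keyword)
print(f"t(14) = {t_stat:.3f}, p = {p_value:.4f}")
```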
Table 4. Knowledge Acquisition in learners.
Knowledge Acquisition | Performing | Underperforming
Fully | 17 | 9
Partially | 7 | 6
None | 6 | 30
Counts are over all 75 learner–question pairs (15 learners × 5 MCQs).
Table 5. Comparison between different assessment techniques.
Parameters | Traditional Assessment | Online Assessment | IEyeGASE
Score | Yes | Yes | Yes
User Interface | No | Yes | Yes
Additional Infrastructure | No | Yes | Yes
Dedicated Resource | Yes | No | No
Personalized Feedback | No | Yes | Yes
Response Time | No | Yes | Yes
Visualization of Results | No | Yes | Yes
Knowledge Acquired | No | Yes | Yes
Wavering Behavior | No | No | Yes
Confusion in Mind | No | No | Yes
Inattentional Blindness on Keywords | No | No | Yes
Engagement in Task | No | Yes | Yes
Table 6. Summary of Deeper Insights into each learner's performance.
Learner ID | MCQ 1 | MCQ 2 | MCQ 3 | MCQ 4 | MCQ 5
S1 | KA(F) | KA(F) | KA(F) | KA(N), WB | KA(F), WB
S2 | KA(P), WB | KA(N), IB, E(N) | KA(N) | KA(N) | KA(F)
S3 | KA(N), F(Y) | KA(F), WB, IB | KA(N), IB, F(Y) | KA(N), WB | KA(F), IB
S4 | KA(P), WB | KA(P) | KA(N), IB | KA(N) | KA(F), WB
S5 | KA(F), WB | KA(F) | KA(N), IB, F(Y), E(N) | KA(N), F(Y) | KA(F)
S6 | KA(N), IB | KA(P), E(N) | KA(F), WB, IB | KA(N), E(N) | KA(F), WB, IB, E(N)
S7 | KA(P), WB, IB | KA(N) | KA(P), IB | KA(N) | KA(P)
S8 | KA(N) | KA(F), CL(Y), E(N) | KA(N), IB | KA(P) | KA(F), IB
S9 | KA(N), IB, E(N) | KA(F), IB, E(N) | KA(N), IB, F(Y), E(N) | KA(N), IB, E(N) | KA(N), IB, E(N)
S10 | KA(F) | KA(N) | KA(N), E(N) | KA(N) | KA(P)
S11 | KA(P) | KA(N) | KA(N), E(N), IB | KA(N) | KA(F), IB
S12 | KA(F), E(N) | KA(F) | KA(F) | KA(N), E(N) | KA(F), IB
S13 | KA(F), WB, E(N) | KA(N), E(N) | KA(F), IB, F(Y), E(N) | KA(N), E(N) | KA(P), CL(Y), E(N)
S14 | KA(P), WB | KA(N), E(N) | KA(N), IB, F(Y), E(N) | KA(N), E(N) | KA(F), IB, E(N)
S15 | KA(N) | KA(F), CL(Y), E(N) | KA(N), IB | KA(N) | KA(F), IB, E(N)
KA = Knowledge Acquisition (F = Fully, P = Partially, N = None); IB = Inattentional Blindness; WB = Wavering Behavior; F(Y) = Findability (Yes); E(N) = Engagement (No); CL(Y) = Low Confidence Level (Yes).
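Summaries such as Table 4 can be audited directly against the per-learner codes in Table 6. The sketch below parses the KA class out of each cell and tallies it; `table6` is a hand-transcribed fragment (only two learners shown), and splitting the tally into performing versus underperforming learners would additionally require the correctness data in Table 1:

```python
# Sketch: tallying knowledge-acquisition classes from Table 6's insight codes.
# Only two learners are transcribed here; extend the dict for the full table.
import re
from collections import Counter

table6 = {
    "S1": ["KA(F)", "KA(F)", "KA(F)", "KA(N), WB", "KA(F), WB"],
    "S2": ["KA(P), WB", "KA(N), IB, E(N)", "KA(N)", "KA(N)", "KA(F)"],
}

counts = Counter()
for learner, cells in table6.items():
    for cell in cells:
        ka_class = re.search(r"KA\((F|P|N)\)", cell).group(1)
        counts[ka_class] += 1

print(dict(counts))  # -> {'F': 5, 'N': 4, 'P': 1} for the two rows above
```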
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
