Article

Predictive Eye Movements Characterize Active, Not Passive, Participation in the Collective Embodied Learning of a Scientific Concept

1 Department of Mathematics and Computer Science, Weizmann Institute of Science, Rehovot 76100, Israel
2 Department of Brain Sciences, Weizmann Institute of Science, Rehovot 76100, Israel
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(15), 8627; https://doi.org/10.3390/app13158627
Submission received: 28 June 2023 / Revised: 19 July 2023 / Accepted: 24 July 2023 / Published: 26 July 2023

Abstract

Embodied pedagogy maintains that teaching and learning abstract concepts can benefit significantly from integrating bodily movements into the process. However, the dynamics of such an integration, as well as its dependency on active participation, are not known. Here, we examined the dynamics of visual perception loops during embodied training by tracking eye movements during a session of the collective embodied learning of a concept in physics—angular velocity. Embodied learning was accomplished by having the subjects form a line that rotated around a central object, in this case, a bottle. We tracked the gaze resulting from eye and head movements in 12 subjects, who both actively participated in the collective embodied exercise and passively watched it. The tracking data of 7 of these 12 subjects passed our tracking reliability criteria in all the trials and are reported here. During active learning, the learners tended to look ahead of the rotating line (by 35.18 ± 14.82 degrees). In contrast, while passively watching others performing the task, the learners tended to look directly at the line. Interestingly, while performing the collective exercise, the learners were unaware that they were looking ahead of the rotating line. We conclude that the closed-loop perceptual dynamics differed between the active and passive modes, and we discuss possible consequences of the observed differences with respect to embodied pedagogy.

1. Introduction

Neuro-pedagogy is an interdisciplinary field that integrates the perspectives of the learning sciences and neuroscience. A main discipline in this field is the study of teaching and learning strategies. Monitoring aspects of neural processing can facilitate an understanding of the effects of different teaching environments and pedagogies. This is a central objective of the research presented in this paper. Our pedagogical approach was based on a full-body, technology-free, embodied pedagogy for teaching and learning physics, which we developed in a previous study [1,2]. Our neurobiological probing was based on tracking the motor aspect of visual perception via gaze tracking (with gaze determined by head and eye control) [3].

1.1. Embodied Cognition

One approach to connecting experience with conceptual knowledge is embodiment—a cognitive science paradigm that sees the brain and body as intricately intertwined and inseparable entities. According to this approach, mental functions include activities throughout the body, environment, and society. Numerous studies based on the embodiment paradigm have lent support to the premise that cognition is grounded in physical reality [4,5,6,7,8].
The embodied cognition perspective supports the paradigm of learning through movement: “To say that cognition is embodied means that it arises from bodily interactions with the world. […] cognition depends on the kinds of experiences [coming] from having a body with particular perceptual and motor capabilities […] inseparably linked and [forming] the matrix within which reasoning, memory, emotion, language […] are meshed” [9]. Theorists of embodied cognition position sensorimotor activity as being embedded in virtually all reasoning and conceptual processes [10]. The main argument is that understanding is rooted in physical experience; hence, a learning environment that incorporates learner mobility to solve motor problems can support the development of cognitive abilities such as concept learning [11]. Embodied cognition has a significant impact on educational research [12,13,14], some of which is described below.

1.2. Embodied Pedagogy for Teaching and Learning Physics

Numerous studies over the years have explored students’ difficulties in learning physics. One of the most researched directions in addressing these difficulties is based on a learner’s active participation [15,16]. Active learning can be based on computers, research, writing, peer instruction, and additional approaches in which the student does not just watch and listen. In passive learning, the learner only listens to the teacher; in active learning, by contrast, the learner is an active partner in the lesson, for example through games, discussions, sessions, discourse, and more. Active learning has a clear advantage over traditional learning in the context of conceptual understanding [17,18]. The research presented here is consistent with active learning based on the embodied pedagogy approach for teaching and learning physics, which we developed in a previous study [1,2]. This embodied pedagogy involves full-body movements without any support from a technological environment. The approach coheres with the embodiment paradigm and relies on the interrelationship between existing (intuitive) physical knowledge and novel formal knowledge.
The underlying assumption of this pedagogy is that actual physical experience can be used as a unique resource for learning complex concepts (e.g., angular velocity in physics) by associating them with daily bodily activities. This assumption is supported by frequent indications about the contribution of embodiment to learning and problem solving [8,19]. The central principle of this pedagogy is to enable learners to physically experience a physics-related phenomenon before defining and explaining it verbally—“experience first, signify later” [2,4,20].

The Angular Velocity Concept and the Collective Circular Exercise

Circular motion is a complex topic to teach and learn [21]. Students have difficulties primarily with the kinematics of circular motion [22]. In particular, students often confuse linear velocity and angular velocity [23]. Moreover, the concept of an angle is also often misconstrued [24]; many students confuse the definition of an angle with that of an arc, which consequently impairs their understanding of angular velocity [20].
Angular velocity is defined as the angular displacement over time of a body in a circular path. The mean angular velocity of bodies A and B moving from their respective Point 1 to Point 2 is the same when their motion lasts the same amount of time (Figure 1a); in this example, if the motion lasts 10 s, then the angular velocity of both bodies is six degrees per second.
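For concreteness, the arithmetic behind this example can be written out; the 60-degree displacement follows from the 6 degrees per second over 10 s quoted in the Figure 1 caption, and the second relation (with the angular velocity expressed in radians) is the standard link to linear velocity, which explains why a body farther from the center must move faster to keep the same angular velocity.

```latex
\omega = \frac{\Delta\theta}{\Delta t} = \frac{60^{\circ}}{10\ \text{s}} = 6^{\circ}/\text{s},
\qquad
v = \omega r .
```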
The collective circular exercise is an embodied collective activity, which refers to learning this concept according to our embodied pedagogy approach for teaching and learning physics [2]. During this collective task, the instructor positions a bottle in the room and asks the learners to “walk around the bottle together as one body”. Students usually interpret the instruction as keeping the line intact as a straight line (Figure 1b). After several cycles of trial and error, the students typically stabilize on a collective method for circling the bottle together, with those farther away from the bottle walking faster. This exercise is typically followed by a verbal explanation accompanied by a specific arm gesture [2].
The contribution of this instructional approach to students’ learning is evident. For example, we tested it with high-school female students, both dance majors and physics majors, and the learning was evident and successful [20]. Moreover, after asking the students to summarize the concept of angular velocity through creative summary works, we found that the students expressed not only a deep conceptual understanding, but also creativity and a philosophical and effective depth [2,20]. A microanalysis of the behavior of the students practicing the collective circular exercise implied that head direction was actively involved in the learning process [2]. The current study was designed to monitor gaze behavior continuously throughout the exercise.

1.3. Eye Tracking in Learning

Most of our external information and perception of the world is achieved through vision. Tracking people’s gazes using eye-tracking technology can provide information about which elements attract people’s attention—what people focus on and what they ignore [25]. Eye tracking has become a widely used method for analyzing user behavior in marketing, neuroscience, human–computer interaction, and visualization research [3]. Educational science is one of the fields of applied eye-tracking research [26]. Some specific applications of human–computer interaction systems include studying visual attention [27], problem solving and executive function [28], syntax comprehension [29], hand-gesture-based cursor control [30], and eye–hand coordination [31].
Students’ behavior has been studied extensively using various eye-tracking measurements in individual learning across various disciplines [32,33]. These studies have included: eye tracking in physics education [34], learning from graphics [35], multimedia learning processes [36], and learning from a scientific text [37].
In contrast, the study of learning in collaborative processes by investigating learners’ gazes is rather uncommon, with a few exceptions using computer-supported collaborative learning (CSCL) [38,39,40]. Most studies use eye tracking while the student is sitting or standing, as in these examples from mathematics education research [41,42]. In some studies, students move their heads or arms [43,44,45]. To the best of our knowledge, we present for the first time the use of eye tracking for investigating individual learners participating in collaborative learning through a collective embodied exercise with full-body movement.

1.4. Being a Passive or an Active Learner in an Embodied Learning Environment

The pedagogical differences between active embodied learning and passive embodied learning (e.g., when observing active learners) are not yet clear [11]. When collective embodied exercises are used in a whole-class setting, there may not be enough time for every student to physically engage with the learning environment. Therefore, an important question to consider is whether students who are situated outside the embodied environment as observers will learn as well as active learners. It should be noted that even supposedly passive learners are not physically passive, since they constantly move their eyes and perhaps additional body parts while observing an exercise. In this paper, a passive learner is one who observes the active embodied learners.
The importance of actively performing embodied exercises, rather than just watching others perform them, has been demonstrated, for example, in studies where students describe mathematical graphs using body gestures [46]. On the other hand, there are studies that have suggested that observing someone else’s actions may lead to a conceptual understanding akin to performing the action oneself [47]. Valuable passive embodied learning is assumed to be based on the activity of “mirror neurons”, neurons that can potentially translate observed movements into representations of self-movements [47,48]. In this study, we have tested, among other things, whether the eye movements of passive and active learners behave similarly.

1.5. Predictive Behavior and Looking Ahead

Predictive behavior is ubiquitous in life and neural systems [49]. Anticipatory control is observed, for example, in diving birds that retract their wings before plunging into water [50]; catching a ball requires both predictive and reflexive (feedback) mechanisms [51]. According to Nijhawan and Wu, for example, in order to sense and interact with moving objects, the visual system must predict the future position of the object to compensate for delays [52]. Predictive responses are also found in single cells [53].
Rats tend to “look ahead” with their whiskers before turning [54,55,56]. It is known that people tend to look ahead of a rotating clock arm [57,58]. Eye movements often precede hand movements in natural tasks [59]. In addition, and relevant to our current study, studies have shown that, when one person is required to follow the actions of someone else, the observer anticipates the other’s movements, preceding them with their gaze [60,61]. Predictive behavior is often associated with closed-loop dynamics [49,54].

1.6. Closed-Loop Perception (CLP)

Brains contain closed loops at all levels, many of which have been implicated in perceptual processing [62]. CLP maintains that perception emerges from continuous circular processes in such loops, processes in which the brain moves its sensory organs in a manner that matches the sensory inputs resulting from these movements; the relationships between the motor and sensory components of this perception represent external objects or events [63]. Perception, then, according to CLP, is a closed-loop process in which information flows between the environment, sense organs, and the brain in a continuous loop, with no starting or ending point. Motor-sensory loops strive to reach a steady state, in which the motor behavior predicts the sensory input, and the sensory input predicts the motor behavior. In these perceptual processes, the visual system is expected to control specific motor variables, depending on the task and conditions [64,65].

1.7. The Objective and Novelty of the Current Study

The main objective of the current study was to probe the dynamics of visual perception via the recording of eye movements during active and passive participation in embodied pedagogy. In particular, we aimed at quantitatively characterizing these dynamics and identifying the differences between active and passive participation in order to facilitate the design of effective pedagogical methods. The novelty of our approach was the application of real-time, head-free eye tracking in a model-based manner, in order to obtain insights into the mechanisms of embodied pedagogical training under the framework of closed-loop perception.

2. Materials and Methods

2.1. Participants

Thirty healthy middle-school physics teachers with normal vision (24 females and 6 males, 30–60 years old) participated in the study as part of a physics teacher training held in a studio class. All the subjects signed a health declaration concerning their eye health and vision. We preferred to start with teachers in order to maximize compliance and minimize attention distractions. We selected the teachers on the basis of their lack of familiarity with the learned concept (post hoc, we discovered that two of them were familiar with the concept). In total, 12 of the subjects (10 females and 2 males) had their eye movements tracked during the study. Only 1 of these 12 subjects was familiar with the concept being learned (angular velocity). All the subjects provided written consent and received payment for their participation.
All 12 subjects wore eye trackers in all the experimental tasks (Pupil Core eye tracker from Pupil Labs, Berlin). We used the remote recording app (Pupil Mobile) and offline eye-tracking analysis software (Pupil Player) supplied by Pupil Labs, using the manufacturer’s default confidence threshold. Subjects (n = 5) whose confidence level was above threshold for <25% of the recording time were excluded from the analysis; the analysis of eye movements thereafter was based on the remaining 7 subjects.
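As an illustration of this exclusion criterion, the sketch below screens a per-subject gaze export by the fraction of above-threshold samples. The file layout, the column name, and the 0.6 threshold (the value noted in the Figure 3 caption) are assumptions for illustration rather than a record of the actual pipeline.

```python
import pandas as pd

CONFIDENCE_THRESHOLD = 0.6   # assumed default confidence threshold (cf. Figure 3 caption)
MIN_VALID_FRACTION = 0.25    # subjects below this fraction of above-threshold samples are excluded


def passes_reliability_criterion(gaze_csv_path: str) -> bool:
    """Return True if enough of the recording has above-threshold gaze confidence.

    Assumes a Pupil-style gaze export with a 'confidence' column (hypothetical path and format).
    """
    gaze = pd.read_csv(gaze_csv_path)
    valid_fraction = (gaze["confidence"] > CONFIDENCE_THRESHOLD).mean()
    return valid_fraction >= MIN_VALID_FRACTION


# Example usage with hypothetical per-subject export paths:
# kept = [s for s in range(1, 13) if passes_reliability_criterion(f"subject_{s:02d}/gaze_positions.csv")]
```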

2.2. The Experimental Tasks and Design

Each subject participated in 4 trials: 2 active trials (trials 1 and 3) and 2 passive trials (trials 2 and 4), with each trial lasting ~30 s. In an active trial, the subject performed the collective circular exercise; in a passive trial, the subject only watched the other subjects performing the collective circular exercise. Before the first trial, each subject answered a written questionnaire related to the concept of angular velocity:
“In the illustration, in front of you, there are 12 equal cuts, every 30 degrees. Three points (1,2,3) are marked in the illustration. Each of the points moves in its own circle at a constant rate, but they all return to the place from which they began to move after 24 s.
[Questionnaire illustration: a circle divided into 12 equal 30-degree sectors, with points 1, 2, and 3 marked.]
  • Draw the trajectory of each of the points on the diagram.
  • Mark where point 1 is after passing 3 sections. Mark the place of points 2 and 3 at this time.
  • Which points in the figure have the same linear velocity?
  • Which points in the figure have the same angular velocity?”
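For orientation, the arithmetic implied by this item can be written out. Since all three points share the same 24-s period, they share the same angular velocity, whereas their linear velocities depend on their radii (which are specified only in the illustration); hence only points lying at equal radii share the same linear velocity.

```latex
\omega = \frac{360^{\circ}}{24\ \text{s}} = 15^{\circ}/\text{s} \quad \text{(identical for all three points)},
\qquad
v = \omega r .
```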
Between trials 1 and 2, the instructor gave the subjects a brief (~5-min) verbal explanation of the concept of angular velocity, accompanied by a demonstration using arm gestures (with the elbow as a pivot and the rotation of the forearm). All the procedures met the Weizmann Institute of Science ethical standards and were approved by the Weizmann Institute’s Institutional Review Board (IRB).
The collective circular exercise: The subjects were asked to walk around a bottle together with the other subjects while keeping the line intact (Figure 1). Six groups of five subjects were trained on the collective circular exercise described above. The 12 subjects whose eye movements were tracked were those (of the 30) who stood at one of the two edges of the moving line during the exercise (Figure 1b). Only two subjects were recorded in each session due to a technical limitation—only two eye trackers were available simultaneously. After the experiments, the subjects were asked to report verbally where they had been looking during the experiment.

2.3. Materials

We used eye trackers that were connected to a cell phone attached to the subject’s clothing, enabling the subject to move easily in the space. The device has three cameras: one camera for each eye, sampling at 200 Hz (200 frames per second), and one HD world camera, sampling at 30 Hz (30 frames per second), which captured the subject’s field of view (Figure 2).
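Because the eye cameras and the world camera run at different rates (200 Hz vs. 30 Hz), any frame-by-frame analysis must associate each world frame with the gaze samples recorded around it. Pupil Player handles this mapping when it overlays the gaze dot on the world video; the sketch below is only a minimal illustration of the idea, assuming plain timestamp arrays rather than the actual export format.

```python
import numpy as np


def gaze_per_world_frame(world_ts: np.ndarray, gaze_ts: np.ndarray, gaze_xy: np.ndarray) -> np.ndarray:
    """Average the ~200 Hz gaze samples falling nearest to each ~30 Hz world frame.

    world_ts : (F,) world-frame timestamps, in seconds
    gaze_ts  : (G,) gaze-sample timestamps, in seconds
    gaze_xy  : (G, 2) gaze positions (e.g., normalized image coordinates)
    Returns an (F, 2) array of per-frame mean gaze positions (NaN where no samples fall).
    """
    # Assign each gaze sample to the world frame with the closest timestamp.
    frame_idx = np.searchsorted(world_ts, gaze_ts)
    frame_idx = np.clip(frame_idx, 1, len(world_ts) - 1)
    left_closer = (gaze_ts - world_ts[frame_idx - 1]) < (world_ts[frame_idx] - gaze_ts)
    frame_idx = np.where(left_closer, frame_idx - 1, frame_idx)

    per_frame = np.full((len(world_ts), 2), np.nan)
    for f in range(len(world_ts)):
        samples = gaze_xy[frame_idx == f]
        if len(samples):
            per_frame[f] = samples.mean(axis=0)
    return per_frame
```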

2.4. Data Analysis

2.4.1. Eye Movement Tracking Setup

Figure 3 shows a snapshot from the Pupil Player software, as viewed by a subject standing near the bottle (who cannot be seen in the image). The green dot surrounded by a yellow circle marks this subject’s gaze while participating in the collective circular exercise. The curved blue and yellow arrows superimposed on the image indicate the directions of the subjects’ movement, demonstrating the larger linear distance covered by the farther participant (the other tracked subject in the activity).

2.4.2. Classification by Human Observers

For each subject’s recording, two human observers classified, in each frame, the position of the gaze relative to the line of walkers as (1) on the line, (2) ahead of the line, or (3) behind the line. Observer 1 was the first author of the paper, and observer 2 was unfamiliar with the study. The video recordings of the experiment were analyzed using the Pupil Player software.
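A minimal sketch of how such frame-wise labels can be tallied into per-trial percentages and averaged across the two observers is given below; the label coding and the function name are assumptions made for illustration.

```python
import numpy as np

# Label coding assumed for illustration: 1 = on the line, 2 = ahead of the line, 3 = behind the line.


def category_percentages(labels_obs1: list[int], labels_obs2: list[int]) -> dict[int, float]:
    """Percentage of classified frames in each category, averaged across the two observers."""
    percentages = {}
    for category in (1, 2, 3):
        per_observer = [
            100.0 * np.mean(np.asarray(labels) == category)
            for labels in (labels_obs1, labels_obs2)
        ]
        percentages[category] = float(np.mean(per_observer))
    return percentages


# Example: category_percentages([2, 2, 1, 2], [2, 2, 2, 1]) -> {1: 25.0, 2: 75.0, 3: 0.0}
```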

2.4.3. Azimuthal Angle Analysis

(i) Pupil Player Software Azimuthal Angle:
For each subject’s recording, two human researchers conducted a semi-automatic analysis of the recording to estimate the azimuthal angle of the gaze from the line, as seen in Figure 3a. This analysis was conducted with the Pupil Player software, using its validation function. The two researchers chose frames in which the subject’s gaze was clearly detected and part of the participants’ line was visible. In each appropriate frame, they manually marked the point on the line that seemed closest to the subject’s gaze (see the yellow dot on the line in Figure 3a). Based on the chosen point on the line and the location of the gaze, an estimate of the angular distance was made. To measure the inter-observer reliability, we computed the intra-class correlation (ICC) of the two sets of measurements and obtained r_ICC = 0.68 for the active subjects, r_ICC = 0.58 for the passive subjects, and r_ICC = 0.95 for the combined data set (active and passive together); a sketch of such an agreement computation is given at the end of this subsection. The inter-observer reliability values fell within the range of inter-observer reliabilities previously found in our lab [66] and elsewhere [67]. The high level of inter-observer correlation for the combined data set reflects the fact that the noise arising from the difference between the observers was small compared to the signal we report. In fact, the root mean squared difference between the observers (9.9%) was at least 7 times smaller than the mean differences between the reported observations (>70%; see Section 3.3).
(ii) Manual Measurement of Azimuthal Angle:
To test the reliability of our measurements of the angular distance, we selected certain video images of situations that could be physically reconstructed in the studio class. Figure 3b shows 1 instance out of 12 in which we physically reconstructed the measurement of the angle. Using information from the video frames, we reconstructed the locations of the center, the gaze, and the line. We placed a bottle in the room together with two other objects that marked the gaze and the line and, with the help of a protractor, we physically measured the angle. To assess the reliability of the semi-automatic measurements (of one observer) against these manual angular measurements, we computed the ICC of the two sets of measurements and obtained r_ICC = 0.87.
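For readers who wish to reproduce an agreement check of this kind, the sketch below computes an intra-class correlation and the root mean squared difference between two observers’ angle estimates. The use of the pingouin package, the long-format column names, and the choice of ICC form are assumptions made for illustration; the specific ICC variant used in the study is not asserted here.

```python
import numpy as np
import pandas as pd
import pingouin as pg


def interobserver_agreement(angles_obs1: np.ndarray, angles_obs2: np.ndarray) -> tuple[float, float]:
    """Intra-class correlation and RMS difference between two observers' angle estimates (degrees)."""
    n = len(angles_obs1)
    # Reshape into long format: one row per (frame, observer) pair.
    long = pd.DataFrame({
        "frame": np.tile(np.arange(n), 2),
        "observer": ["obs1"] * n + ["obs2"] * n,
        "angle": np.concatenate([angles_obs1, angles_obs2]),
    })
    icc_table = pg.intraclass_corr(data=long, targets="frame", raters="observer", ratings="angle")
    # ICC2 = single-rater, two-way random effects (one plausible choice; an assumption here).
    icc = float(icc_table.loc[icc_table["Type"] == "ICC2", "ICC"].iloc[0])
    rms_diff = float(np.sqrt(np.mean((angles_obs1 - angles_obs2) ** 2)))
    return icc, rms_diff
```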

2.5. Statistical Analysis

Descriptive statistics were calculated to characterize the gaze behavior in the active and passive embodied collective exercises. A t-test was used to test whether the gaze was on the line for the majority of the time, as indicated by the subjects, and to compare the time spent gazing ahead of the line between the active and passive trials. A Kolmogorov–Smirnov test showed that our data set did not differ significantly from a normal distribution (D-statistic = 0.23, p = 0.77).
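A minimal sketch of these tests using scipy.stats is given below; the input arrays (one value per subject) and the exact test variants are assumptions made for illustration, not a record of the published analysis.

```python
import numpy as np
from scipy import stats


def gaze_statistics(active_ahead: np.ndarray, passive_ahead: np.ndarray) -> dict:
    """Run the tests described above on per-subject percentages of time spent gazing ahead of the line.

    Both inputs are 1-D arrays with one value per subject (n = 7 here).
    """
    combined = np.concatenate([active_ahead, passive_ahead])
    # Kolmogorov-Smirnov test against a normal distribution fitted to the data.
    ks_stat, ks_p = stats.kstest(combined, "norm", args=(combined.mean(), combined.std(ddof=1)))
    # Two-tailed independent t-test comparing active vs. passive trials.
    t_ind, p_ind = stats.ttest_ind(active_ahead, passive_ahead)
    # One-sample t-test of the active percentages against the 50% null hypothesis.
    t_one, p_one = stats.ttest_1samp(active_ahead, popmean=50.0)
    return {
        "ks_normality": (ks_stat, ks_p),
        "active_vs_passive": (t_ind, p_ind),
        "active_vs_50_percent": (t_one, p_one),
    }
```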

3. Results

3.1. Questionnaires

The subjects received questionnaires before and after the collective circular exercise (see Section 2). Ten of the subjects were not familiar with the concept of angular velocity before the experiment, and their answers were analyzed. In sections A and B, which did not relate to the concept of angular velocity, no changes were found between before and after the collective circular exercise and the brief (~5-min) verbal explanation of angular velocity accompanied by a demonstration using arm gestures. In section C, which referred to linear velocity, four out of ten subjects answered correctly before the collective circular exercise and explanation, and ten out of ten answered correctly after them. In section D, which referred to the concept of angular velocity, all ten subjects answered incorrectly before the collective circular exercise and explanation, and correctly afterward.

3.2. Eye Movements during Active and Passive Participation in the Collective Circular Exercise

We measured the subjects’ gazes during the collective circular exercise. Two observers scanned all the detectable frames (see Section 2.4.2) and categorized each of these frames as: (1) looking at the line, (2) looking ahead of the line, or (3) looking behind the line. Figure 4 shows two snapshots from one subject’s world camera recording while he was performing the activity. The subject stood near the bottle and is not visible in the figure. The green dot represents his gaze in each frame. Figure 4a, on the left, is an example of what was most often observed during the active trials: the subject’s gaze was ahead of the line, preceding the movement of the line. Figure 4b, on the right, is an example of what was less often observed during the active trials: the subject’s gaze was on the line of subjects. This behavior stood in contrast to the subjects’ verbal reports, which stated that they were mostly looking at the other subjects in the line.
We also measured the subjects’ gazes during the passive trials, i.e., while they were watching the other subjects performing the collective circular exercise. Throughout the activity, there were two passive observers standing in front of the two edges of the walking line (before it started moving): one stood in front of the subject at the end of the moving line and one stood in front of the bottle. The exact instruction given to each observer was: “please stand and watch the group performing the task”. Figure 5 shows four snapshots from the world camera recording of the passive subject who stood in front of the bottle while he was watching the collective circular exercise. The green dot represents his gaze in each frame. Panels (a) and (d) are examples of the cases most often observed, in which the subject looked at the line of subjects performing the exercise. Panels (b) and (c) are examples of the cases less often observed, in which the subject looked ahead of or behind the line of subjects, respectively.
The distribution of the gaze directions differed between the active and passive trials (Figure 6). While the subjects looked ahead of the line 90 ± 2% of the time during the active trials, they did so only 19 ± 3% of the time during the passive trials (p < 0.001, two-tailed independent t-test, t = 13.9, df = 12; data averaged across the two observers; n = 7 subjects). To test the probability that this difference could have arisen by chance, we tested the data against the null hypothesis that the gaze was directed at the line 50% of the time. This null hypothesis was rejected (p < 0.001, two-tailed t-test, t = 27.7, df = 6; data averaged across the two observers; n = 7 subjects).

3.3. Dynamics of Gaze during Active Trials

We quantified the frame-by-frame angle between the gaze and the subjects’ line during the active trials.

Looking Ahead during the Collective Circular Exercise

Each plot in Figure 7 shows the angle between a single subject’s gaze and the subject line in each frame during the entire trial (see Section 2). Two trials are shown for each subject (trial 1 in green dots and trial 3 in blue dots).
The four graphs on the left of Figure 7 refer to the subjects who stood near the bottle at the center of the circle, and the three graphs on the right refer to the subjects who stood at the far end of the walking line. The graph at the bottom right is plotted in lighter colors because it refers to the only subject in this group who, in retrospect, turned out to be familiar with the concept of angular velocity before the experiment. The graphs show that the activity lasted about 20–30 s and that most of the points have positive values on the vertical axis. Positive values indicate that the subject looked ahead of the moving line in the collective circular exercise; points on the horizontal axis represent frames in which the subject looked at the line, so the measured azimuthal angle is zero.
No systematic difference was found between the patterns of eye movements before (Figure 7, green) and after (Figure 7, blue) the explanation of the concept of angular velocity. Specifically, we compared three gaze-dependent variables: the mean of the percentage of time the gaze was ahead of the line was 16 ± 13% before the intervention and 22 ± 20% after it; the mean pairwise difference was 6 ± 12% (p = 0.65; paired t-test). The mean of the gaze’s angular distance from the line was 29.9 ± 4.1 degrees before the intervention and 28.5 ± 7.1 degrees after it; the mean pairwise difference was 1.5 ± 6 degrees (p = 0.83; paired t-test). The mean standard deviation of the gaze’s angular distance from the line was 15.2 ± 3.4 degrees before the intervention and 13.4 ± 2 degrees after it; the mean pairwise difference was 1.8 ± 3.9 degrees (p = 0.66; paired t-test).

4. Discussion

Tracking the visual gaze of human subjects while they were actively engaged in an embodied learning task of a physical concept (angular velocity) revealed that the subjects tended to look substantially ahead of the moving group. In contrast, these subjects did not tend to look ahead of the moving group when they were passively watching the moving group. Since the subjects were free to move their entire body, the visual gaze was determined by a superposition of the body, head, and eye movements.
Looking ahead of a moving object is not a novel finding. It is known that eye movements usually precede hand movements [68]. For example, people tend to look ahead of a rotating clock arm [57,58], and eye movements can precede hand movements in natural tasks such as mountaineering [59]. In addition, predictive responses are found even in single cells [53]. Such behavior is typically referred to as a predictive behavior [49,51,54].
Yet, the finding of looking ahead while actively performing the collective circular exercise was unexpected. First, the people who previously performed this task, including the instructor of the current study, were unaware of this phenomenon. Second, in their verbal reports, all the subjects reported looking at each other so as to synchronize their movements in the exercise, and not looking ahead. This disparity between what the subjects verbally reported looking at and what they actually looked at might be explained by the observation that people do not necessarily look at what they are attending to [69]. It could be that our subjects reported what they were attending to, and not where they were looking.
How might looking ahead be related to embodied learning? Our interpretation of the looking ahead behavior corresponds with the predictive interpretation mentioned above, and takes it one step further. The predictive interpretation maintains that our motor actions are attuned to meeting the relevant sensory inputs at the right time, taking into consideration the time delays involved in the relevant sensory-motor loop. We adopt this interpretation within the general framework of closed-loop perception (CLP) [63]. In our general scheme, sensory and motor processes are equal partners in a circular motor-sensory process that underlies perception. Importantly, perception is not a sensory process. Rather, it is the phenomenon emerging from an ongoing circular process, in which sensory events predict motor events and motor events predict sensory events.
In this regard, it is interesting to examine the closed-loop interpretation of our finding that looking ahead occurred during active, but not passive, participation in the task. In the framework of CLP, this finding suggests that performing an active task requires interactions between different loops (the oculomotor and locomotion loops), and that such interactions are slow, while the passive task involves only one sensory-motor loop—the oculomotor one. Indeed, predictive oculomotor delays are on the order of tens of milliseconds (~50 ms); during such a delay, no member of the rotating group moved more than 3 cm.
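The 3 cm figure is consistent with a rough back-of-the-envelope estimate; the ~2.5 m radius of the outermost walker and the assumption that the line completes roughly one revolution during a ~30 s trial are illustrative guesses, not measured values:

```latex
v \approx \frac{2\pi r}{T} \approx \frac{2\pi \times 2.5\ \text{m}}{30\ \text{s}} \approx 0.5\ \text{m/s},
\qquad
0.5\ \text{m/s} \times 0.05\ \text{s} \approx 2.5\ \text{cm} < 3\ \text{cm}.
```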
Previous studies on mathematics and physics education have shown that eye tracking is particularly beneficial for studying processes rather than outcomes [34,42]. Specifically, previous studies have indicated the importance of intersubjective sensorimotor coordination [41], a dynamic coordination between individual perception–action systems [70,71]. Our results add an additional angle to this line of studies. Assuming that each individual perceives her or his environment via an array of closed-loop motor-sensory circuits (CLP, [63]), dynamic intersubjective coordination is further assumed to form an extension of such motor-sensory loops to the intersubjective level. That is, the completion of a motor-sensory cycle in each individual now also depends on the sensory information collected about the dynamics of the group. While a theoretical framework for such group perceptual processes is still lacking, it can already be speculated that the looking ahead phenomenon observed here is a part of this process. Namely, the intersubjective coordination required for the perceptual process of a group involves the inter-subject predictive control of each individual’s gaze.
These results are also relevant to the collaboration quality of collaborative learning groups [40,43]. Assessing the collaboration quality in remotely-linked student dyads, for example, demonstrated the importance of mediating technologies, such as eye tracking, for supporting joint attention in collaborative learning activities [40]. In our study, joint attention was probably obtained not by attending to external objects, but rather by attending to the common movement pattern of the group, as indicated by the consistent looking ahead behavior. As attending to the common movement of the group requires an assessment, or perception, of the angular velocity, this joint attention might play a role in the collaborative learning of the angular velocity concept. This joint attention is embodied—it is expressed by eye movements. Naturally, it must also involve neuronal representations that are included in the motor-sensory loop(s) controlling eye (and head) movements [58]. It can be said that the dynamics of the entire motor-sensory loop, including the gaze behavior, form the embodied representation of the concept of angular velocity in this task. The different embodiment pattern observed here between the active and passive participants may provide a clue for explaining the previously reported advantages of active versus passive participation in learning mental tasks [71,72,73,74].
The gaze behavior observed in our passive trials was contrary to the studies in the literature that have shown that, when one person is required to follow the actions of someone else, the observer anticipates the other’s movements, preceding them with their gaze [60,61]. This difference might be explained by the fact that our passive observers were required to follow the actions of a group rather than of an individual, and this might have affected their predictive behavior, possibly due to the reduced level of action mirroring. In any case, whatever the reason was, the striking difference between the active and passive trials in our group task calls for the attention of educators when conducting collective embodied learning tasks—active and passive participation may lead to different results. This recommendation can be reconciled with the literature, which indicates an advantage for active learning over passive learning [46,75].
According to the closed-loop perceptual scheme, the angle between the line of the subjects (a sensory variable) and each participant’s gaze (a motor variable) was a dynamic variable, whose behavior reflected the perceptual process. In the case of embodied learning, as was the case here, the behavior of this variable may also reflect the learning process. For example, our subjects’ looking patterns may have been affected by their attempts to estimate the distance that each of the subjects had to pass in a given time in order to keep the line intact.
If indeed sensory-motor behavior played a role in such predictive calculations, then the fact that this behavior was typical only for the active trials (and not for the passive trials) suggests a principal advantage of the embodied learning process [76,77]. Namely, when one is passive, their gaze is directed at where things are; however, when one is active, their gaze is directed at where things would be, had their assessment of the world been correct.
The current study focused on the analysis of looking behavior during embodied teaching. Looking behavior is only one component of the behavior underlying visual perception, visual perception is only one component of sensory perception, and sensory perception is only one component in the process underlying embodied learning. In our view, the primary importance of our results is not in the actual measured behavior of the gazes, but in identifying the existence of the dynamic, perception-related processes that differ between passive and active learning, which are likely involved in the embodied learning process.
How exactly these processes are related remains to be determined in future studies. Such studies should attempt to overcome the limitations of the current study. Primarily, the current study did not allow for an investigation of the possible contribution of the eye movements to the learning of the concept. Additional limitations include an insufficient statistical power for characterizing the differences in the eye movement dynamics between the different conditions and an insufficient power for identifying dynamical patterns of convergence or oscillations within each trial.
Overall, our findings on the active predictive involvement of eye movements in an embodied pedagogical task of science teaching reveal a tool for probing the dynamics of the latent closed-loop sensory-motor processes accompanying overt training processes. The specific predictive metrics revealed here, as well as the quantitative differences between the active and passive participation, will likely be helpful in constructing testable, closed-loop models for embodied pedagogy in the future.

Author Contributions

Conceptualization, R.Z. and E.A.; methodology and execution, O.K., T.B.-J. and R.Z.; formal analysis, O.K. and R.Z.; writing—original draft preparation, R.Z.; writing—review and editing, R.Z., O.K. and E.A.; visualization, R.Z.; supervision, E.A.; funding acquisition, E.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the European Research Council (ERC) under the EU Horizon 2020 Research and Innovation Programme (grant agreement No. 786949) and by the Neuro-wellness academic research grants program funded by the JOY Fund (grant agreement No. 713641).

Institutional Review Board Statement

All procedures met the Weizmann Institute of Science ethical standards and were approved by the Weizmann Institute’s Institutional Review Board (IRB; approval number: 733-2).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Not applicable.

Acknowledgments

Special thanks to the group of Israeli academic mothers “Mothers researching alone together” (in Israel and the USA) for the genuine support in the writing process.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Zohar, R.; Bagno, E.; Eylon, B. Dance and Movement as Means to Promote Physics Learning. In Proceedings of the 7th International Conference on Education and New Learning Technologies, Barcelona, Spain, 6–8 July 2015; EDULEARN15. pp. 6881–6885. [Google Scholar]
  2. Zohar, R.; Bagno, E.; Eylon, B.; Abrahamson, D. Motor skills, creativity, and cognition in learning physics concepts. Brain Body Cogn. 2017, 7, 67–76. [Google Scholar]
  3. Duchowski, A.T. A breadth-first survey of eye-tracking applications. Behav. Res. Meth. Instrum. Comput. 2002, 34, 455–470. [Google Scholar] [CrossRef]
  4. Abrahamson, D.; Lindgren, R. Embodiment and embodied design. In The Cambridge Handbook of Learning Sciences, 2nd ed.; Sawyer, R.K., Ed.; Cambridge University Press: Cambridge, UK, 2014; pp. 358–376. [Google Scholar]
  5. Hutto, D.D.; Kirchhoff, M.D.; Abrahamson, D. The enactive roots of STEM: Rethinking education design in mathematics. Educ. Psychol. Rev. 2015, 27, 371–389. [Google Scholar]
  6. Solomon, K.O.; Barsalou, L.W. Representing properties locally. Cogn. Psychol. 2001, 43, 129–169. [Google Scholar]
  7. Núñez, R.E.; Edwards, L.D.; Matos, J.F. Embodied cognition as grounding for situatedness and context in mathematics education. Educ. Stud. Math. 1999, 39, 45–65. [Google Scholar]
  8. Lindgren, R.; Johnson-Glenberg, M. Emboldened by embodiment: Six precepts for research on embodied learning and mixed reality. Educ. Res. 2013, 42, 445–452. [Google Scholar] [CrossRef]
  9. Thelen, E.; Schöner, G.; Scheier, C.; Smith, L.B. The dynamics of embodiment: A field theory of infant perseverative reaching. Behav. Brain Sci. 2001, 24, 1–34. [Google Scholar]
  10. Anderson, M.L.; Richardson, M.J.; Chemero, A. Eroding the boundaries of cognition: Implications of embodiment1. Top. Cogn. Sci. 2012, 4, 717–730. [Google Scholar]
  11. Abrahamson, D.; Sánchez-García, R. Learning is moving in new ways: The ecological dynamics of mathematics education. J. Learn. Sci. 2016, 25, 203–239. [Google Scholar]
  12. Shapiro, L.; Stolz, S.A. Embodied cognition and its significance for education. Theory Res. Educ. 2019, 17, 19–39. [Google Scholar] [CrossRef]
  13. Abrahamson, D.; Nathan, M.J.; Williams-Pierce, C.; Walkington, C.; Ottmar, E.R.; Soto, H.; Alibali, M.W. The future of embodied design for mathematics teaching and learning. Front. Educ. 2020, 5, 147. [Google Scholar] [CrossRef]
  14. Scherr, R.E.; Close, H.G.; Close, E.W.; Flood, V.J.; McKagan, S.B.; Robertson, A.D.; Vokos, S. Negotiating energy dynamics through embodied action in a materially structured environment. Phys. Rev. Spec. Top.-Phys. Educ. Res. 2013, 9, 020105. [Google Scholar]
  15. Thornton, R.K.; Sokoloff, D.R. Assessing student learning of Newton’s laws: The Force and Motion Conceptual Evaluation and the evaluation of active learning laboratory and lecture curricula. Am. J. Phys. 1998, 66, 338–352. [Google Scholar]
  16. Meltzer, D.E.; Thornton, R.K. Resource letter ALIP–1: Active-learning instruction in physics. Am. J. Phys. 2012, 80, 478–496. [Google Scholar]
  17. Hake, R.R. Interactive-engagement versus traditional methods: A six-thousand-student survey of mechanics test data for introductory physics courses. Am. J. Phys. 1998, 66, 64–74. [Google Scholar]
  18. Laws, P.W. Calculus-based physics without lectures. Phys. Today 1991, 44, 24–31. [Google Scholar]
  19. Smith, J.; DiSessa, A. Misconceptions reconceived: A constructivist analysis of knowledge in transition. J. Learn. Sci. 1993, 3, 115–163. [Google Scholar]
  20. Zohar, R. Movements as a Door for Learning Physics Concepts, Integrating Embodied Pedagogy in Teaching. Ph.D. Thesis, The Weizmann Institute of Science, Rehovot, Israel, 2018. [Google Scholar] [CrossRef]
  21. McDermott, L.C.; Redish, E.F. Resource letter: PER-1: Physics education research. Am. J. Phys. 1999, 67, 755–767. [Google Scholar]
  22. Dianningrum, M.C.; Sutopo; Hidayat, A. Students’ understanding of circular motion with multi representational approach. Int. J. Sci. Technol. Res. 2016, 5, 25–33. [Google Scholar]
  23. Mashood, K.K.; Singh, V.A. Rotational kinematics of rigid body about a fixed axis: Development and analysis of an inventory. Eur. J. Phys. 2015, 36, 45020. [Google Scholar]
  24. Ozen Unal, D.; Urun, O. Sixth grade students’ some difficulties and misconceptions on angle concept. Qual. Res. Educ. 2021, 27, 125–154. [Google Scholar] [CrossRef]
  25. Zhaowei, L.; Peiyuan, G.; Chen, S. A review of main eye movement tracking methods. J. Phys. Conf. Ser. 2021, 1802, 042066. [Google Scholar]
  26. Jarodzka, H.; Holmqvist, K.; Gruber, H. Eye tracking in educational science: Theoretical frameworks and research agendas. J. Eye Mov. Res. 2017, 10, 1–18. [Google Scholar]
  27. Costescu, C.; Rosan, A.; Brigitta, N.; Hathazi, A.; Kovari, A.; Katona, J.; Demeter, R.; Heldal, I.; Helgesen, C.; Thill, S.; et al. Assessing visual attention in children using gp3 eye tracker. In Proceedings of the 2019 10th IEEE International Conference on Cognitive Infocommunications (CogInfoCom), Naples, Italy, 23–25 October 2019; IEEE: New York, NY, USA, 2019; pp. 343–348. [Google Scholar]
  28. Kovari, A. Study of algorithmic problem-solving and executive function. Acta Polytech. Hung. 2020, 17, 241–256. [Google Scholar]
  29. Katona, J.; Kovari, A.; Heldal, I.; Costescu, C.; Rosan, A.; Demeter, R.; Thill, S.; Stefanut, T. Using eye-tracking to examine query syntax and method syntax comprehension in LINQ. In Proceedings of the 2020 11th IEEE International Conference on Cognitive Infocommunications (CogInfoCom), Mariehamn, Finland, 23–25 September 2020; IEEE: New York, NY, USA, 2020; pp. 000437–000444. [Google Scholar]
  30. Sziladi, G.; Ujbanyi, T.; Katona, J.; Kovari, A. The analysis of hand gesture based cursor position control during solve an IT related task. In Proceedings of the 2017 8th IEEE International Conference on Cognitive Infocommunications (CogInfoCom), Debrecen, Hungary, 11–14 September 2017; IEEE: New York, NY, USA, 2017; pp. 000413–000418. [Google Scholar]
  31. Kovari, A.; Katona, J.; Costescu, C. Quantitative analysis of relationship between visual attention and eye-hand coordination. Acta Polytech. Hung. 2020, 17, 77–95. [Google Scholar]
  32. Tsai, M.J.; Hou, H.T.; Lai, M.L.; Liu, W.Y.; Yang, F.Y. Visual attention for solving multiple-choice science problem: An eye-tracking analysis. Comput Educ. 2012, 58, 375–385. [Google Scholar]
  33. Lai, M.-L.; Tsai, M.-J.; Yang, F.-Y.; Hsu, C.-Y.; Liu, T.-C.; Lee, S.W.-Y.; Lee, M.-H.; Chiou, G.-L.; Liang, J.-C.; Tsai, C.-C. A review of using eye-tracking technology in exploring learning from 2000 to 2012. Educ. Res. Rev. 2013, 10, 90–115. [Google Scholar]
  34. Hahn, L.; Klein, P. Eye tracking in physics education research: A systematic literature review. Phys. Rev. Phys. Educ. Res. 2022, 18, 013102. [Google Scholar]
  35. Mayer, R.E. Unique contributions of eye-tracking research to the study of learning with graphics. Learn. Instr. 2010, 20, 167–171. [Google Scholar]
  36. Van Gog, T.; Scheiter, K. Eye tracking as a tool to study and enhance multimedia learning. Learn. Instr. 2010, 20, 95–99. [Google Scholar]
  37. Ariasi, N.; Mason, L. Uncovering the effect of text structure in learning from a science text: An eye-tracking study. Instr. Sci. 2011, 39, 581–601. [Google Scholar] [CrossRef]
  38. Sharma, K.; Jermann, P.; Dillenbourg, P.; Prieto, L.P.; D’Angelo, S.; Gergle, D.; Schneider, B.; Rau, M.; Pardos, Z.; Rummel, N. CSCL and Eye-Tracking: Experiences, Opportunities and Challenges. In Making a Difference: Prioritizing Equity and Access in CSCL, Proceedings of the 12th International Conference on Computer Supported Collaborative Learning (CSCL), Philadelphia, PA, USA, 18–22 June 2017; Smith, B.K., Borge, M., Mercier, E., Lim, K.Y., Eds.; International Society of the Learning Sciences: Philadelphia, PA, USA, 2017. [Google Scholar]
  39. Becker, S.; Mukhametov, S.; Pawels, P.; Kuhn, J. Using mobile eye tracking to capture joint visual attention in collaborative experimentation. In Proceedings of the Physics Education Research Conference 2021, PERC, Virtual Event, 22–23 November 2021; pp. 39–44. [Google Scholar]
  40. Schneider, B.; Pea, R. Real-Time Mutual Gaze Perception Enhances Collaborative Learning and Collaboration Quality. In Educational Media and Technology Yearbook; Orey, M., Branch, R., Eds.; Springer: Cham, Switzerland, 2017; Volume 40. [Google Scholar] [CrossRef]
  41. Shvarts, A.; Abrahamson, D. Dual-eye-tracking Vygotsky: A microgenetic account of a teaching/learning collaboration in an embodied-interaction technological tutorial for mathematics. Learn. Cult. Soc. Interact. 2019, 22, 100316. [Google Scholar] [CrossRef]
  42. Strohmaier, A.R.; MacKay, K.J.; Obersteiner, A.; Reiss, K.M. Eye-tracking methodology in mathematics education research: A systematic literature review. Educ. Stud. Math. 2020, 104, 147–200. [Google Scholar] [CrossRef]
  43. Walkington, C.; Chelule, G.; Woods, D.; Nathan, M.J. Collaborative gesture as a case of extended mathematical cognition. J. Math. Behav. 2019, 55, 100683. [Google Scholar] [CrossRef]
  44. Werner, K.; Raab, M. Moving your eyes to solution: Effects of movements on the perception of a problem-solving task. Q. J. Exp. Psychol. 2014, 67, 1571–1578. [Google Scholar] [CrossRef]
  45. Gerofsky, S. Seeing the graph vs. being the graph: Gesture, engagement and awareness in school mathematics. In Integrating Gestures; Stam, G., Ishino, M., Eds.; John Benjamins: Amsterdam, The Netherlands, 2011; pp. 245–256. [Google Scholar]
  46. King, B.; Smith, C.P. Mixed-reality learning environments: What happens when you move from a laboratory to a classroom. Int. J. Res. Educ. Sci. IJRES 2018, 4, 577–594. [Google Scholar] [CrossRef]
  47. Rizzolatti, G.; Sinigaglia, C. Mirrors in the Brain: How Our Minds Share Actions and Emotions, 1st ed.; Oxford University Press: Oxford, UK, 2008. [Google Scholar]
  48. Hickok, G. Do mirror neurons subserve action understanding. Neurosci. Lett. 2013, 540, 56–58. [Google Scholar] [PubMed] [Green Version]
  49. Friston, K. The free-energy principle: A unified brain theory. Nat. Rev. Neurosci. 2010, 11, 127–138. [Google Scholar]
  50. Lee, D.N.; Reddish, P.E. Plummeting gannets: A paradigm of ecological optics. Nature 1981, 29, 293–294. [Google Scholar]
  51. Ghez, C.; Krakauer, J. The organization of movement. In Principles of Neural Science; Kandel, E.R., Schwartz, J.H., Jessell, T.M., Eds.; McGraw Hill: New York, NY, USA, 2000. [Google Scholar]
  52. Nijhawan, R.; Wu, S. Compensating time delays with neural predictions: Are predictions sensory or motor. Philos. Transact. R. Soc. A 2009, 367, 1063–1078. [Google Scholar]
  53. Duhamel, J.R.; Colby, C.L.; Goldberg, M.E. The updating of the representation of visual space in parietal cortex by intended eye movements. Science 1992, 255, 90–92. [Google Scholar] [CrossRef]
  54. Wallach, A.; Deutsch, D.; Oram, T.B.; Ahissar, E. Predictive whisker kinematics reveal context-dependent sensorimotor strategies. PLoS Biol. 2020, 18, e3000571. [Google Scholar] [CrossRef]
  55. Towal, R.B.; Hartmann, M.J. Right-left asymmetries in the whisking behavior of rats anticipate head movements. J. Neurosci. 2006, 26, 8838–8846. [Google Scholar] [PubMed] [Green Version]
  56. Mitchinson, B.; Martin, C.J.; Grant, R.A.; Prescott, T.J. Feedback control in active sensing: Rat exploratory whisking is modulated by environmental contact. Proc. Biol. Sci. 2007, 274, 1035–1041. [Google Scholar] [PubMed]
  57. Nijhawan, R. Visual prediction: Psychophysics and neurophysiology of compensation for time delays. Behav. Brain Sci. 2008, 31, 179–198; discussion 198–239. [Google Scholar] [PubMed] [Green Version]
  58. Land, M.F. Eye movements and the control of actions in everyday life. Prog. Retin. Eye Res. 2006, 25, 296–324. [Google Scholar] [PubMed]
  59. Pelz, J.; Hayhoe, M.; Loeber, R. The coordination of eye, head, and hand movements in a natural task. Exp. Brain Res. 2001, 139, 266–277. [Google Scholar]
  60. Flanagan, J.R.; Johansson, R.S. Action plans used in action observation. Nature 2003, 424, 769–771. [Google Scholar]
  61. Gredebäck, G.; Falck-Ytter, T. Eye movements during action observation. Perspect. Psychol. Sci. 2015, 10, 591–598. [Google Scholar]
  62. Ahissar, E.; Kleinfeld, D. Closed-loop neuronal computations: Focus on vibrissa somatosensation in rat. Cereb. Cortex 2003, 13, 53–62. [Google Scholar] [CrossRef] [Green Version]
  63. Ahissar, E.; Assa, E. Perception as a closed-loop convergence process. Elife 2016, 5, e12830. [Google Scholar]
  64. Gruber, L.Z.; Ahissar, E. Closed loop motor-sensory dynamics in human vision. PLoS ONE 2020, 15, e0240660. [Google Scholar]
  65. Gruber, L.Z.; Ullman, S.; Ahissar, E. Oculo-retinal dynamics can explain the perception of minimal recognizable configurations. Proc. Natl. Acad. Sci. USA 2021, 118, e2022792118. [Google Scholar] [PubMed]
  66. Mizrachi, N.; Nelinger, G.; Ahissar, E.; Arieli, A. Idiosyncratic selection of active touch for shape perception. Sci. Rep. 2022, 12, 2922. [Google Scholar]
  67. Koo, T.K.; Li, M.Y. A guideline of selecting and reporting intraclass correlation coefficients for reliability research. J. Chiropr. Med. 2016, 15, 155–163. [Google Scholar] [PubMed] [Green Version]
  68. Land, M.F.; Hayhoe, M. In what ways do eye movements contribute to everyday activities? Vis. Res. 2001, 41, 3559–3565. [Google Scholar] [PubMed] [Green Version]
  69. Posner, M.I. Attention in cognitive neuroscience: An overview. In The Cognitive Neurosciences; Gazzaniga, M.S., Ed.; The MIT Press: Cambridge, MA, USA, 1995; pp. 615–624. [Google Scholar]
  70. Mathayas, N.; Brown, D.E.; Wallon, R.C.; Lindgren, R. Representational gesturing as an epistemic tool for the development of mechanistic explanatory models. Sci. Educ. 2019, 103, 1047–1079. [Google Scholar] [CrossRef]
  71. Nathan, M.J.; Walkington, C. Grounded and embodied mathematical cognition: Promoting mathematical insight and proof using action and language [journal article]. Cogn. Res. Princ. Implic. 2017, 2, 9. [Google Scholar] [CrossRef] [Green Version]
  72. Brooks, N.B.; Barner, D.; Frank, M.; Goldin-Meadow, S. The role of gesture in supporting mental representations: The case of mental abacus arithmetic. Cogn. Sci. 2018, 42, 554–575. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  73. Pouw, W.T.J.L.; van Gog, T.; Paas, F. An embedded and embodied cognition review of instructional manipulatives. Educ. Psychol. Rev. 2014, 26, 51–72. [Google Scholar] [CrossRef]
  74. Goldin-Meadow, S.; Levine, S.C.; Zinchenko, E.; Yip, T.K.; Hemani, N.; Factor, L. Doing gesture promotes learning a mental transformation task better than seeing gesture. Dev. Sci. 2012, 15, 876–884. [Google Scholar] [PubMed] [Green Version]
  75. Jang, S.; Vitale, J.M.; Jyung, R.W.; Black, J.B. Direct manipulation is better than passive viewing for learning anatomy in a three-dimensional virtual reality environment. Comput. Educ. 2017, 106, 150–165. [Google Scholar]
  76. Prinz, W. Perception and action planning. Eur. J. Cogn. Psychol. 1997, 9, 129–154. [Google Scholar]
  77. Gallese, V.; Lakoff, G. The brain’s concepts: The role of the sensory-motor system in conceptual knowledge. Cogn. Neuropsychol. 2005, 22, 455–479. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Illustration of the concept of angular velocity and the embodied collective activity related to its teaching. (a) Angular velocity is defined as the angular displacement over time of a body in a circular path. Given that the blue arrow represents the movement over 10 s, the angular velocity of points A and B, while moving from their respective point 1 to point 2, is 6 degrees/second. (b) An illustration depicting the embodied collective activity (collective circular exercise), which refers to learning the concept of angular velocity according to our embodied pedagogy approach for teaching and learning physics. The learners are asked to walk together around a fixed point—a bottle. Using trial and error, they negotiate, verbally and using gestures, a collective method to circle the bottle together. They move to keep the line intact with those farther away from the bottle walking faster.
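As a brief aside on panel (a), writing the relation out makes the arithmetic explicit. The 60° displacement below is inferred from the stated result of 6 degrees/second over 10 s; it is our reading of the figure, not a value quoted in the caption.

\[ \omega \;=\; \frac{\Delta\theta}{\Delta t} \;=\; \frac{60^{\circ}}{10\ \mathrm{s}} \;=\; 6^{\circ}/\mathrm{s}, \qquad v \;=\; \omega r \quad (\omega\ \text{in rad/s}). \]

Points A and B therefore share a single angular velocity, while the linear speed v = ωr grows with the radius r; this is why, in panel (b), learners farther from the bottle must walk faster to keep the line intact.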
Figure 2. Pupil Core eye-tracking device with three cameras. The two eye cameras, one for each eye, are circled in yellow, and the world camera is circled in red.
Figure 3. The azimuthal angle from the Pupil Player software and a manual measurement of the azimuthal angle. (a) Snapshot from the Pupil Player software during the collective circular exercise, taken while the subject was standing near the bottle. The green dot surrounded by a yellow circle marks this subject’s gaze. The curved blue and yellow arrows superimposed on the image indicate the directions of the subjects’ movement. The angular distance is the azimuthal angle between the direction of the subject’s gaze and the direction of the subjects’ line (yellow dot). Insets: a single frame of the binocular tracking; red indicates confidence > 0.6. (b) Example of a physical verification of the azimuthal angles calculated by the tracker’s software.
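For readers who want to reproduce this kind of measure offline, the sketch below computes a signed azimuthal distance between a gaze direction and the line direction, wrapped to (-180°, 180°]. It is a minimal illustration under our own conventions (the function name and the sign convention, with positive meaning ahead of the line in the direction of rotation, are assumptions); it is not part of the Pupil Player toolchain used in the study.

```python
def azimuthal_distance(gaze_deg: float, line_deg: float) -> float:
    """Signed angular distance (degrees) from the line direction to the gaze
    direction, wrapped to (-180, 180]. Positive values mean the gaze lies
    ahead of the line in the (assumed) direction of rotation."""
    diff = (gaze_deg - line_deg) % 360.0
    if diff > 180.0:
        diff -= 360.0
    return diff

# Examples: gaze 30 degrees ahead of the line, and 20 degrees behind it.
print(azimuthal_distance(50.0, 20.0))   # 30.0
print(azimuthal_distance(350.0, 10.0))  # -20.0
```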
Figure 4. Snapshots from the world camera recording of an active subject in the collective circular exercise, taken while he was standing closest to the center and wearing the eye-tracking device. The green dot represents the subject’s gaze. (a) The subject’s gaze is ahead of the line of subjects. (b) The subject’s gaze is at the line (directed at the participant standing next to him).
Figure 5. Snapshots from the world camera of a passive subject watching the collective circular exercise; the green dot represents the passive subject’s gaze. The passive observer in this figure stands in front of the bottle. (a) The subjects move while facing the passive subject; the passive subject’s gaze is at the line of subjects. (b) The passive subject’s gaze is ahead of the line of subjects. (c) The passive subject’s gaze is behind the line of subjects. (d) The subjects move with their backs to the passive subject; the passive subject’s gaze is at the line of subjects.
Figure 6. Distribution of gaze direction across three categories—at, ahead of, or behind the line. The vertical axis shows the average percentage of time spent in each category. The different colors represent the two human observers. (a) While passively watching the collective circular exercise. (b) While actively performing the collective circular exercise.
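The three categories in this figure can be obtained from the signed angular distance by thresholding it around zero. The sketch below is a minimal illustration of that step; the 5° tolerance band and the assumption of a constant sampling rate are ours, not parameters reported for the human scoring.

```python
from collections import Counter

def categorize(angle_deg: float, tol_deg: float = 5.0) -> str:
    """Label one gaze sample relative to the line, using a tolerance band
    (tol_deg is an assumed value) around zero."""
    if angle_deg > tol_deg:
        return "ahead"
    if angle_deg < -tol_deg:
        return "behind"
    return "at"

def percent_time(angles_deg) -> dict:
    """Percentage of samples per category; with a constant sampling rate this
    approximates the percentage of time spent in each category."""
    counts = Counter(categorize(a) for a in angles_deg)
    return {c: 100.0 * counts[c] / len(angles_deg) for c in ("at", "ahead", "behind")}

print(percent_time([0.0, 2.0, 40.0, 35.0, -10.0, 30.0]))
# {'at': 33.33..., 'ahead': 50.0, 'behind': 16.66...}
```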
Figure 7. The angular distance over time between each subject’s gaze and the line of subjects while performing the collective circular exercise—one graph per subject. Left, the four subjects who stood near the bottle; right, the three subjects who stood at the end of the subjects’ line. Green, trial 1; blue, trial 3. The subject depicted in the bottom right (light colors) was the only subject in this group who was familiar with the concept of angular velocity before the experiment. Positive angles indicate looking ahead of the line; zero angles indicate looking at the line.
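A per-subject trace like the ones in this figure is typically summarized by the mean and standard deviation of the signed angular distance, with a positive mean indicating that the gaze led the line on average. A minimal sketch, using made-up sample values:

```python
import statistics

def summarize(angles_deg):
    """Mean and sample standard deviation of the signed angular distance (degrees)."""
    return statistics.mean(angles_deg), statistics.stdev(angles_deg)

mean, sd = summarize([20.0, 35.0, 50.0, 40.0, 30.0])  # illustrative values only
print(f"{mean:.1f} +/- {sd:.1f} degrees")  # 35.0 +/- 11.2 degrees
```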