Review

Perception and Action under Different Stimulus Presentations: A Review of Eye-Tracking Studies with an Extended View on Possibilities of Virtual Reality

1 Movement Science Group, Institute for Sport Science, Philosophical Faculty II, Martin-Luther University Halle-Wittenberg, 06108 Halle (Saale), Germany
2 Department of Sports Engineering and Movement Science, Institute III: Sports Science, Otto-von-Guericke-University Magdeburg, 39104 Magdeburg, Germany
* Author to whom correspondence should be addressed.
Appl. Sci. 2021, 11(12), 5546; https://doi.org/10.3390/app11125546
Submission received: 10 May 2021 / Revised: 9 June 2021 / Accepted: 14 June 2021 / Published: 15 June 2021
(This article belongs to the Topic Extended Reality (XR): AR, VR, MR and Beyond)

Abstract

Visual anticipation is essential for performance in sports. This review provides information on the differences between stimulus presentations and motor responses in eye-tracking studies and considers virtual reality (VR) as a new possibility to present stimuli. A systematic literature search on PubMed, ScienceDirect, IEEE Xplore, and SURF was conducted. The number of studies examining the influence of stimulus presentation (in situ, video) is small but still sufficient to describe differences in gaze behavior. The seven reviewed studies indicate that stimulus presentations can cause differences in gaze behavior. Further research should focus on displaying game situations via VR. The advantages of a scientific approach using VR are experimental control and repeatability. In addition, game situations could be standardized and movement responses could be included in the analysis.

1. Introduction

Sporting expertise is expressed through exceptional motor skills and abilities and is reflected in athletes’ cognitive abilities [1], at lower levels (executive functions; e.g., [2]) as well as higher ones (attention [3], pattern recognition [4], and anticipation [5]). Anticipation and pattern recognition are essential parts of expert performance in many sports. The activities in these sports (e.g., soccer goalkeeping, volleyball) often have a strong spatial component: athletes have to orient themselves in space and pick up the movement kinematics of opponents. Gaze behavior is a relevant factor for describing both the demands of a situation and the athlete’s information processing. Eye tracking is therefore often used in sports science studies examining gaze behavior. The method uses corneal reflections of a closely situated, directed light source (often infrared) to track the pupil [6], with the aim of inferring fixations and other gaze parameters in sports. In addition to assessing the decision itself, eye-tracking data help to analyze anticipation and decision-making processes [7].
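To make the gaze parameters mentioned above more concrete, the sketch below illustrates one common way to derive fixations from raw gaze samples: a dispersion-threshold (I-DT) detector. It is not taken from the reviewed studies; the function name and the threshold values are illustrative assumptions.

```python
# Illustrative sketch (not from the reviewed studies): a minimal dispersion-threshold
# (I-DT) fixation detector, one common way to infer fixations from raw gaze samples.
# The thresholds below are assumed example values, not recommendations.

def detect_fixations(samples, max_dispersion=1.0, min_duration=0.100):
    """samples: list of (t, x, y) gaze points, t in seconds, x/y in degrees of visual angle.
    Returns a list of fixations as (onset, offset, centroid_x, centroid_y)."""
    fixations = []
    i = 0
    while i < len(samples):
        j = i
        # Grow the window while the points stay within the dispersion threshold.
        while j + 1 < len(samples):
            window = samples[i:j + 2]
            xs = [p[1] for p in window]
            ys = [p[2] for p in window]
            dispersion = (max(xs) - min(xs)) + (max(ys) - min(ys))
            if dispersion > max_dispersion:
                break
            j += 1
        duration = samples[j][0] - samples[i][0]
        if duration >= min_duration:
            window = samples[i:j + 1]
            cx = sum(p[1] for p in window) / len(window)
            cy = sum(p[2] for p in window) / len(window)
            fixations.append((samples[i][0], samples[j][0], cx, cy))
            i = j + 1
        else:
            i += 1
    return fixations
```

The parameters reported in the reviewed studies (number of fixations, fixation duration, fixation locations) can be derived directly from such a list of detected fixations.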
Studies dealing with athletes’ perceptual-cognitive abilities in various sports (e.g., tennis) differ in how they present the visual stimuli or the sport-specific situation to be analyzed (e.g., a tennis serve). The most common ways to present a stimulus when exploring gaze behavior are the in situ perspective [8] and video presentation [9]. In early research, visual behavior was examined with the so-called occlusion technique [10]. In these studies, the opponent’s movements (e.g., a free kick) were presented in clips that occluded either the opponent’s movements at various times (e.g., while hitting the ball) or various parts of the body or the ball [9]. More recent research paradigms have challenged this approach and study anticipation and information processing from an ecological perspective [11]. In his work on so-called representative study designs, Brunswik showed how important the relationship between humans and their environment is for perception [12]. Occlusion studies have accordingly been criticized for using stimuli that do not occur naturally in this form and for generalizing their results nonetheless.
Several studies have shown that expert athletes anticipate the opponent’s action earlier and more accurately than novices [13]. The focus lies either on adequate reaction planning (laboratory studies) or on standardization and transfer to generality (field studies). Adequate reaction planning is addressed in Gibson’s concept of perception–action coupling [11]. This approach is closely related to how the response options specified in a study influence perception and the movement response. However, a more comprehensive analysis is still lacking.
The most recent meta-analytic data revealed that there can be differences between laboratory and field studies concerning gaze behavior [1]. In addition, judgment heuristics can differ depending on the selected research design [14,15].
Mann et al. [1] reported significant effects of the type of stimulus presentation on the discrimination between experts and novices for response accuracy (p = 0.02), the number of fixations (p = 0.004), and fixation duration (p < 0.001), but not for response time (p = 0.22). “Although effect sizes are largest in the field, it is difficult to ascertain whether participants are responding to different stimuli, rendering reliable comparison highly problematic” ([1], p. 474). The limited perspective of video induces a different visual perception than in situ settings [14]. In the in situ perspective, gaze is more centralized because test persons make head movements instead of saccades [16]. The main advantages of video clips for presenting a situation while tracking participants’ eye movements are standardization, repeatability, and safety [17]. In the in situ perspective, standardization is lacking because of the real-life situation. Experimental psychology gave rise to the requirement to create task designs that are as representative as possible [12], and it has been recommended that task designs represent the organism’s natural environment [18]. The presentation of stimuli through videos reduces the in situ (3-D) perspective to a two-dimensional display [1] and thus reduces the image size and depth of the display [19].
The advantage of VR lies in the possibility of standardizing the presented stimuli and letting the test person interact with the virtual environment as a response to the stimulus [20]. New technological advances (VR tools with 3-D graphics, head-mounted displays (HMDs), or rendered screen presentations) also make it possible to offer a sport-specific situation in a 3-D setting. The latest developments are also dedicated to vision–action coupling, enabling movements such as shooting a ball and including these actions in the analysis [21]. However, methods for testing the validity (academic research) and training potential (sports training) of VR environments generally lag behind their adoption.
This narrative review aims to point out the differences in gaze behavior and visual search between the in situ perspective, video-based simulations, and VR, and to show the opportunities and problems of VR settings for future study designs.
The four research questions are as follows: Is there a difference in gaze behavior if the action in sports is presented on a video screen or in situ (Q1)? The second question focuses on the movement response and the possibility of a natural action response to visual stimuli (Q2). The third question asks whether and to what extent the perspective in video presentations influences perception and action (Q3). The last question (Q4) asks about differences in gaze behavior between in situ designs and VR studies.

2. Materials and Methods

Because the state of research comparing several stimulus presentations and the associated visual search strategies or eye movements is sparse and inconsistent, neither a systematic review nor a theoretical, qualitative meta-synthesis was carried out [22]. The literature search was conducted and documented based on the PRISMA guidelines using a narrative approach [23] without quantitative analysis. The review provides an overview of the differences between stimulus presentation types (in situ, video-based, and VR) when examining eye movements (via eye tracking).
Studies published from January 1990 up to September 2020 were searched in several ways. First, we searched the PubMed, ScienceDirect (Web of Science), IEEE Xplore, and SURF (national database) databases using relevant keywords in the titles or abstracts of English-language journals. The search was restricted to the period from 1990 onward because of the technical standards of earlier eye-tracking equipment. The following keywords were used for the literature search: (a) for perception: perceptual expertise, visual search, gaze behavior, eye movements, eye-tracking; and (b) for stimulus presentation and test design: test design, in-situ, VR, video presentation, simulation, visual cues, task specificity, task constraint, response. Furthermore, we searched the websites of relevant venues such as ETRA and screened the references of identified articles. Studies were included if they compared different stimulus presentations and recorded gaze behavior or visual search strategy (except for the study implementing VR), and if the article was written in English.

3. Results

The search in the databases identified 1035 (ScienceDirect), 1220 (PubMed), 932 (IEEE Xplore), and 32 articles (SURF database). Screening the references of the identified articles revealed 27 other relevant articles. After the exclusion of duplicates, 1807 journal articles remained. After titles and abstracts were screened, 1776 non-relevant articles were excluded. Thirty-one (full-text) articles were assessed for eligibility. Twenty-three of these were excluded because eye tracking was not performed in these studies; they dealt with training the visual search strategy or with the accuracy of eye-tracking measurements in VR. Another study was excluded because it re-analyzed the data of an already included study. A total of seven studies (in situ vs. video presentation: n = 4; perspectives in video presentations: n = 2; reality vs. VR: n = 1) were included (ScienceDirect: n = 2; PubMed/SURF database: n = 1; Citations: n = 5; see Figure 1). Eye-tracking technology is a frequently researched topic in sports science, and the conducted studies cover a wide range of subject areas (see [7]). However, a meaningful synthesis of studies focusing specifically on the influence of the athlete’s perspective (in situ vs. video presentation or in situ vs. aerial view) has been lacking so far.
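For transparency, the screening flow reported above can be tallied as a quick arithmetic check, using only the counts given in the text:

```python
# Quick tally of the screening flow reported above (counts taken from the text).
identified_db = 1035 + 1220 + 932 + 32   # ScienceDirect, PubMed, IEEE Xplore, SURF
identified_refs = 27                     # from screening reference lists
after_duplicates = 1807                  # remaining after duplicate removal
excluded_title_abstract = 1776
full_text_assessed = after_duplicates - excluded_title_abstract     # 31
excluded_full_text = 23 + 1              # no eye tracking recorded; re-analysis of included data
included = full_text_assessed - excluded_full_text                  # 7
print(identified_db + identified_refs, full_text_assessed, included)  # 3246 31 7
```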
Nevertheless, the research status allows for conclusions to be drawn about differences in gaze behavior. A reason for the weak state of research could be the complex execution of eye-tracking studies. Researchers have to create natural game situations or real opponents’ actions for a study with an in situ design. To create a video presentation design, they have to record the videos of actions, which is laborious. Because of this complexity, VR technologies come into focus.

3.1. In Situ vs. Video Presentation

Following Abernethy et al. [24], Hernández et al. [25] postulated that laboratory settings might not accurately reveal the experts’ advantage due to “(a) removal of experience factor(s) are associated with actually performing the task in an ecologically valid setting, (b) the introduction of potential floor or ceiling effects in measurement variability, and (c) constraining the expert’s typical responses to either using different information to create a response or preventing access to information normally available in the performance context”. Understanding the processes that underpin a qualified decision requires experimental conditions that reproduce the real context and task specificity as closely as possible [26]. Researchers therefore combine as many variables as possible from real situations under laboratory conditions to achieve the highest ecological validity [27]. The question in this context is to what extent eye movements and visual search strategies differ depending on the research design or the way the visual stimuli are presented.
Hernández et al. [25] (see Table 1) examined the effects of perceptual information on tennis coaches’ visual behavior and error detection. The study provides information on the first research question: To what extent does the stimulus presentation influence fixations, eye movements, and cognitive processes? The authors carried out three gaze behavior measurements: a 2-D presentation in the laboratory (video-based), an in situ setting on the court, and a second presentation in the laboratory (video-based). Ten male tennis coaches (five experienced, five novices) had to detect errors in second topspin serves (10 trials). The outcome measures were the verbalized error detection and gaze behavior parameters (number of fixations, fixation duration, and fixation locations). There were fewer fixations and a shorter fixation duration for the 3-D (in situ) and the second 2-D presentation (p < 0.001). Experienced coaches showed fewer fixations than novices. The results support the hypothesis that gaze behavior differences are to be expected due to different stimulus presentations (Q1).
Dicks et al. [14] analyzed the gaze behavior, responses (verbal, movement, and interceptive), and performance of eight male experienced goalkeepers. The stimulus the goalkeepers had to react to was a penalty kick carried out by a trained penalty shooter. The mean fixation duration, the mean number of fixation locations, and the mean number of fixations were measured. The stimuli were presented in an in situ setting and in the laboratory (video-based). Significant effects of condition were detected for fixation duration (F [4,23] = 3.117, p < 0.05) and for the number of fixation locations (F [4,23] = 4.218, p < 0.01). No significant differences between conditions were found for the number of fixations (F [4,23] = 2.404, p = 0.08). The study shows that gaze behavior differences can arise from the type of stimulus presentation and the subsequent response (verbal, movement, or interception). It also indicates that a distinction between vision for perception and vision for action can be useful: perception, measured via the test subjects’ eye movements, seems to depend on the available options for action (Q2).
Afonso et al. [28] examined visual search behaviors and verbal reports during film-based and in situ representative tasks in volleyball players. Nine female experienced volleyball players participated in the experiment and acted as backcourt defenders in six scenarios. The number of fixations, fixation duration, and fixation locations were measured. The settings can be characterized as in situ and video-based, each with a verbal response. Participants had shorter fixation durations for the video-based stimulus presentation (video-based: 659.57 ± 178.06; in situ: 728.11 ± 129.27; F [8,1] = 5.24, p = 0.02, η² = 0.05). No differences were reported for the number of fixations (video-based: 5.15 ± 1.38; in situ: 5.35 ± 0.91; F [8,1] = 0.82, p = 0.37, η² = 0.01) or locations (video-based: 4.98 ± 1.09; in situ: 5.30 ± 0.86; F [8,1] = 2.77, p = 0.10, η² = 0.03). The results provide evidence for the hypothesis that eye movements depend on the type of stimulus presentation (Q1). However, this applies only to fixation duration, not to the number of fixations or fixation locations.
The last study in this field of research was conducted by Zeuwts et al. [29]. Thirteen adults (eight females) without a sports background were examined. Participants had to cycle along a high-quality and a low-quality bicycle path (±700 m). The number of fixations and saccades between areas of interest (AOIs) were measured in the in situ and video conditions. Overall, a significant Pearson correlation of r = 0.507 (p < 0.001) was found for dwell time (in %) between the laboratory and field conditions. A comparison of the two road conditions shows a significant Pearson correlation for the low-quality cycle path (r = 0.663; p < 0.001). However, this effect could not be confirmed for the high-quality cycle path (r = 0.030; p = 0.821). Under real (field) conditions, the participants showed an increased visual search along the vertical line of sight. The authors conclude that the cognitive and motor requirements when cycling were more demanding in the field condition than in the laboratory, where steering no longer played a role. The study again shows differences in gaze behavior between field and laboratory conditions and a dependency on external task constraints (Q1 and Q2).

3.2. Different Perspectives in Video Presentations

A study conducted by Petit and Ripoll [30] (see Table 2) investigated two video presentation perspectives of simulated game scenes to estimate the effect of different stimulus presentations on expert perception and decision-making (a forced-choice decision to pass or not to pass in response to game scenarios). The game scenes were shown from the broadcast point of view and from the players’ perspective to experienced and inexperienced soccer players. The masked prime paradigm was used to examine which parts of the scenes play a decisive role in the perception process. The results show that skilled soccer players make faster decisions. Furthermore, the presentation from the players’ perspective (internal presentation mode) led to quicker and more precise decisions. The presentation perspective thus had a fundamental impact on decision latency and visual behavior. Considering Davids et al. [18], the task design, viewing perspective, and presentation mode should be oriented towards the organism’s natural environment.
In the second study, by Mann et al. [31], 19 skilled youth soccer players were examined while observing identical game situations from two different viewing perspectives. The first perspective was the player perspective, showing the situation as a player would experience it. The second perspective was an aerial view of the game situation from an elevated point above the same location on the field. Soccer players had a more extensive search pattern in the aerial perspective, with a higher rate of fixations per second (t [12] = 4.90, p < 0.001, d = 0.92). The search rate was shorter in the aerial view compared to the player perspective (t [12] = 4.90, p < 0.002, d = 0.75). There was also a significant effect of the viewing perspective on where participants placed their fixations (F [8,5] = 6.56, p = 0.027, ηp² = 0.91). In both perspectives, participants fixated on the correct option; however, in the player’s view, this correct option was chosen less frequently. The two studies demonstrate that differences in visual behavior and decision-making processes are determined by the point of view or perspective (Q3).
It can be summarized that the quality of eye-tracking studies is defined by the compromise, or interplay, between ecological validity (field study) and internal validity (laboratory study). Sport-specific and task-specific perception often runs synchronously with the motor action [32]. The investigation of visual perception and the associated motor reaction from the player’s perspective (in situ), with detection of eye movements using eye-tracking glasses, can be regarded as the gold standard [13]. Presentation using video clips shows higher internal validity and has the advantage that significantly more variables can be controlled. Video presentation is also the only way to show every participant the same game situation (stimuli).

3.3. Reality vs. Virtual Reality

Virtual reality technology is gaining interest in many different areas such as rehabilitation, sport, education, or medicine [33]. Especially in a scientific context, VR offers new possibilities to examine and understand human perception and action.
The in situ presentation of stimuli is the most realistic type of simulation for examining gaze behavior. However, realistic game situations cannot be standardized because none of the players involved can reproduce the same movement a second time. To achieve this standardization, researchers have used video presentations to show game situations or sports actions as stimuli. The disadvantage of video presentations is the non-realistic representation regarding depth of vision and the restriction of the field of view. Visual VR (HMD) offers the opportunity to present such natural sports situations differently. The fast growth of VR simulations in academic research has led to the validation of training applications for transferable skills in surgery or navigation [34]. Other research groups have focused on the possibility of using VR technology to learn general physical skills [35].
First, however, it should be clarified to what extent VR can support valid gaze behavior studies, based on the studies identified in this narrative review. The authors of [36] conducted an experiment focusing on the duel between a handball goalkeeper and a field player (thrower). They recorded the thrower’s movements to create an avatar based on these movements (motion capture). In this case, it was not the eye movements that were compared but the goalkeeper’s gestures to ward off the ball. The authors examined to what extent the VR presentation produced the same movements for the same throw as the real situation in the field. The results show that the gestures did not differ between the real and virtual environments (Q4). However, the study did not provide a real comparison between reality and VR with an HMD presentation and excluded gaze behavior.
Nevertheless, the example shows a great advantage of VR for science: creating different scenarios with real-time interactions between the user and the VR, which cannot be created in reality [37]. In addition, the programmed procedures for an investigation ensure standardized and easy-to-control experimental conditions.
Visual input predominates in VR applications compared with acoustic and haptic information [38]. Integrating further measurement systems such as eye trackers [39] or EEG [40] offers new possibilities to gain a more in-depth insight into visual information processing. For this purpose, mobile eye trackers have been integrated into various existing VR devices such as HMDs or the Cave Automatic Virtual Environment (CAVE). However, this application requires a complex technical implementation. For this reason, some authors carried out studies to measure the accuracy and precision of gaze measurements made in VR compared with those from the natural environment [39]. The results show no significant differences between the two forms of presentation and indicate that VR settings can be used to investigate gaze behavior in sport.
Furthermore, in perception and anticipation research, occlusion techniques showed that, through spatial depth information, athletes experienced a more realistic feeling in the virtual environment than in a 2-D video presentation. As a result, they were able to react better to opponents’ attacks in martial arts [41]. In addition, athletes’ reactions and actions in the CAVE are more similar to those in reality than responses to a 2-D screen [42].
Moreover, Petri et al. [43] showed that additional VR training leads to an improvement in reaction behavior. These studies indicate that the use of VR in combination with eye tracking leads to new insights. Other gaze parameters such as saccades [44] and gaze fixations [45] can also be recorded in VR. Another great advantage of VR lies in the evaluation of eye movement data, since the positions of the objects targeted by gaze are known in VR.
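Because the 3-D positions of scene objects are known in VR, assigning each gaze sample to a fixated object reduces to comparing the gaze ray with the known object positions. The sketch below illustrates this idea; the function, the object list, the tolerance, and the gaze ray are hypothetical example data, and a real VR runtime would supply the gaze origin and direction for every frame.

```python
# Illustrative sketch (hypothetical names and example data): mapping a gaze ray onto
# known scene objects in VR. Objects are approximated by bounding spheres.
import math
from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str
    center: tuple   # (x, y, z) world coordinates in meters
    radius: float   # bounding-sphere radius in meters

def gazed_object(origin, direction, objects, max_offset_deg=2.0):
    """Return the object whose angular offset from the gaze ray (beyond its own
    apparent size) is smallest and below max_offset_deg, or None."""
    best, best_score = None, max_offset_deg
    d_norm = math.sqrt(sum(c * c for c in direction))
    for obj in objects:
        v = [obj.center[i] - origin[i] for i in range(3)]
        v_norm = math.sqrt(sum(c * c for c in v))
        if v_norm == 0.0:
            continue
        cos_a = sum(direction[i] * v[i] for i in range(3)) / (d_norm * v_norm)
        offset = math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))
        apparent_size = math.degrees(math.atan2(obj.radius, v_norm))
        score = offset - apparent_size
        if score < best_score:
            best, best_score = obj, score
    return best

# Hypothetical example: a ball straight ahead and a goalkeeper further away to the left.
scene = [SceneObject("ball", (0.0, 1.2, 5.0), 0.11),
         SceneObject("goalkeeper", (-1.0, 1.0, 11.0), 0.40)]
print(gazed_object(origin=(0.0, 1.6, 0.0), direction=(0.0, -0.05, 1.0), objects=scene).name)  # "ball"
```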
Nevertheless, negative aspects can also be identified in connection with VR and the examination of gaze behavior. Some studies have already investigated the so-called vergence–accommodation conflict [46]. This phenomenon exists because the eye lens focuses the objects on the screen or projection (accommodation), whereas the eye muscles adjust the visual axis (fixation) to the 3-D image (vergence), located either in front of or behind the screen. The conflict between these two adjustments leads to fatigue in some persons using 3-D displays [47]. This potential fatigue has to be considered in scientific VR studies. Other factors that have to be considered when using VR for such experiments are the lack of haptic feedback and the limited locomotion space. Haptic feedback (feeling a ball, touching the racket in tennis) is critical in sports settings because the missing information could lead to a change in motor response and behavior. The space for locomotion is relevant for sports actions that require the participant to move in the environment (e.g., moving to the ball in a tennis return).
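To put the vergence–accommodation conflict described above into numbers: the vergence angle required to fixate a point at distance d is approximately 2·arctan(IPD/2d), while accommodation in an HMD remains tied to the fixed focal distance of the display optics. The following sketch uses assumed example values (interpupillary distance of 63 mm, focal distance of 1.5 m) that do not describe any particular headset.

```python
# Illustrative calculation of the vergence-accommodation mismatch in an HMD.
# The interpupillary distance (63 mm) and the display focal distance (1.5 m)
# are assumed example values, not specifications of any particular headset.
import math

IPD = 0.063           # interpupillary distance in meters (assumed)
FOCAL_DISTANCE = 1.5  # fixed accommodation distance of the HMD optics in meters (assumed)

def vergence_angle_deg(distance_m):
    """Vergence angle (degrees) the eyes must adopt to fixate a point at distance_m."""
    return math.degrees(2 * math.atan(IPD / (2 * distance_m)))

accommodation_angle = vergence_angle_deg(FOCAL_DISTANCE)   # what accommodation "expects"
for virtual_distance in (0.5, 1.5, 5.0, 20.0):              # virtual object distances in meters
    mismatch = vergence_angle_deg(virtual_distance) - accommodation_angle
    print(f"object at {virtual_distance:>4} m: vergence {vergence_angle_deg(virtual_distance):.2f} deg, "
          f"mismatch {mismatch:+.2f} deg")
```

The mismatch grows for virtual objects that are much nearer or farther than the focal plane, which is one reason for the fatigue reported with 3-D displays [47].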

4. Discussion

The results on whether athletes differ in their gaze behavior when the stimuli are presented in situ or by video presentation (Q1) show a significant effect of this factor in the identified studies [8,14,25,29]. The fixation duration, fixation location, or the number of fixations may differ between the two types of study design because of the modified depth perception in the video presentation [48]. In addition, the eye movements carried out by the test persons or athletes (concerning accommodation and vergence) differ between the in situ and the video presentation.
The possible actions or reactions in response to the presented sequence also significantly influenced the eye movements and the participants’ visual search strategy ([14]; Q2). Dicks et al. [49] discuss the nature of the relation between expertise and perception–action coupling using ideas from ecological psychology [11] and the framework of representative task design by Brunswik [12]. Dicks et al. [49] give an example of perception–action coupling: “When analyzing the pickup of information used in the return of serve in tennis, participants should be exposed to the in-situ actions of an opponent and ball-flight while being allowed to use the attended information for action”. In video simulation, perception is separated from the real action, and the research approaches usually allow only a rudimentary response to the presented stimuli (e.g., [14]). With this separation of perception and action, gaze behavior or visual information pickup differs from reality.
When comparing different perspectives on the same game situation (Q3), differences in gaze behavior and visual search strategy could be identified [31]. These differences appear to be determined by the available information. For example, the aerial view reveals more details of the open space because of the depth of the perspective. This information leads to a different search strategy for finding the best decision in the given game situation.
The results of the reviewed studies reveal several reasons for using VR in gaze behavior studies: (a) there are significant differences in gaze behavior between 2-D or video screen projection and in situ presentation [8,14,25,28,29] and also between different presentation perspectives [30,31]; (b) in the current literature, no significant differences between the in situ presentation and the presentation of scenes in VR or 3-D are reported ([36]; Q4); (c) in VR, scenes and stimuli can be shown that cannot be shown in reality or cannot be shown in a standardized manner [37,50]; and (d) in addition to the VR presentation, supplementary examination methods that require a stationary or safe workplace can be used (eye tracking: [39]; EEG: [40]).
In scientific studies, however, the fatigue caused by the (a) vergence–accommodation conflict should be considered in developing the research design and interpreting the results.
Another aspect that lacks consideration is the perception–action paradigm in the context of sports situations. Most sporting movements and gaze movements require movement by the athletes themselves; that is, the athlete moves, for example, to gain a better angle to the opponent or a teammate. Immersive designs address this problem by using artificial locomotion (continuous forward movement) or teleporting (“jumping” from one location to another) [51]. However, using these techniques in the sports context is questionable because the complexity of motor control would be increased: the player must learn how to move with a controller (teleporting) instead of just using their legs when moving more than one or two meters.
In addition to the small number of studies that could be reported, the review has another limitation: the studies report findings from different sports (e.g., tennis, soccer). Gaze behavior could differ between these sports situations, which must be considered when interpreting the results. Therefore, the results cannot be generalized, but they provide an adequate overview because the gaze strategies of individual sports can be described as very similar [1].
Further research should also consider (b) the lack of haptic feedback and the requirement for locomotion space. Another question is whether (c) the established metrics for eye-tracking parameters and (d) the established eye-tracking methodology can be transferred to a VR setting.
Research should focus on the possibilities of conducting eye-tracking studies in a VR setting and show the issues and benefits of this technology. Studies examining gaze accuracy and precision in real-world and virtual reality are the first step, but further work is needed.

Author Contributions

Conceptualization, F.H.; methodology, F.H.; formal analysis, K.W.; resources, F.H. and K.W.; data curation, F.H. and K.W.; writing—original draft preparation, F.H.; writing—review and editing, K.W.; visualization, F.H.; supervision, K.W.; project administration, F.H.; funding acquisition, F.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Acknowledgments

We acknowledge the financial support within the funding programme Open Access Publishing by the German Research Foundation (DFG).

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Mann, D.T.Y.; Williams, A.M.; Ward, P.; Janelle, C.M. Perceptual-cognitive expertise in sport: A meta-analysis. J. Sport Exerc. Psychol. 2007, 29, 457–478.
2. Vestberg, T.; Gustafson, R.; Maurex, L.; Ingvar, M.; Petrovic, P. Executive functions predict the success of top-soccer players. PLoS ONE 2012, 7, e34731.
3. Gray, R. Links Between Attention, Performance Pressure, and Movement in Skilled Motor Action. Curr. Dir. Psychol. Sci. 2011, 20, 301–306.
4. Roca, A.; Ford, P.R.; McRobert, A.P.; Williams, A.M. Perceptual-cognitive skills and their interaction as a function of task constraints in soccer. J. Sport Exerc. Psychol. 2013, 35, 144–155.
5. Loffing, F.; Cañal-Bruland, R. Anticipation in sport. Curr. Opin. Psychol. 2017, 16, 6–11.
6. Duchowski, A. (Ed.) Eye Tracking Methodology; Springer: London, UK, 2007.
7. Hüttermann, S.; Noël, B.; Memmert, D. Eye tracking in high-performance sports: Evaluation of its application in expert athletes. Int. J. Comput. Sci. Sport 2018, 17, 182–203.
8. Afonso, J.; Garganta, J.; Mcrobert, A.; Williams, A.M.; Mesquita, I. The perceptual cognitive processes underpinning skilled performance in volleyball: Evidence from eye-movements and verbal reports of thinking involving an in situ representative task. J. Sports Sci. Med. 2012, 11, 339.
9. Canal-Bruland, R.; van der Kamp, J.; Arkesteijn, M.; Janssen, R.G.; van Kesteren, J.; Savelsbergh, G.J.P. Visual search behaviour in skilled field-hockey goalkeepers. Int. J. Sport Psychol. 2010, 41, 327.
10. Abernethy, B.; Russell, D.G. The relationship between expertise and visual search strategy in a racquet sport. Hum. Mov. Sci. 1987, 6, 283–319.
11. Gibson, J.J. The Ecological Approach to Visual Perception; Houghton, Mifflin and Company: Boston, MA, USA, 1979.
12. Brunswik, E. Perception and the Representative Design of Psychological Experiments; University of California Press: Berkeley, CA, USA, 1956.
13. Williams, A.M.; Ward, P.; Knowles, J.M.; Smeeton, N.J. Anticipation skill in a real-world task: Measurement, training, and transfer in tennis. J. Exp. Psychol. Appl. 2002, 8, 259.
14. Dicks, M.; Button, C.; Davids, K. Examination of gaze behaviors under in situ and video simulation task constraints reveals differences in information pickup for perception and action. Atten. Percept. Psychophys. 2010, 72, 706–720.
15. Hogarth, R.M.; Karelaia, N. Heuristic and linear models of judgment: Matching rules and environments. Psychol. Rev. 2007, 114, 733.
16. Kurz, J.; Munzert, J. How the experimental setting influences representativeness: A review of gaze behavior in football penalty takers. Front. Psychol. 2018, 9, 682.
17. Vickers, J.N. Perception, Cognition, and Decision Training: The Quiet Eye in Action; Human Kinetics: Champaign, IL, USA, 2007.
18. Davids, K.; Araújo, D.; Button, C.; Renshaw, I. Degenerate Brains, Indeterminate Behavior, and Representative Tasks: Implications for Experimental Design in Sport Psychology Research. In Handbook of Sport Psychology; Tenenbaum, G., Eklund, R.C., Eds.; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2007; pp. 224–244.
19. Al-Abood, S.A.; Bennett, S.J.; Hernandez, F.M.; Ashford, D.; Davids, K. Effect of verbal instructions and image size on visual search strategies in basketball free throw shooting. J. Sports Sci. 2002, 20, 271–278.
20. Mueller, F.; Stevens, G.; Thorogood, A.; O’Brien, S.; Wulf, V. Sports over a distance. Pers. Ubiquitous Comput. 2007, 11, 633–645.
21. Beavan, A.; Chin, V.; Ryan, L.M.; Spielmann, J.; Mayer, J.; Skorski, S.; Meyer, T.; Fransen, J. A Longitudinal Analysis of the Executive Functions in High-Level Soccer Players. J. Sport Exerc. Psychol. 2020, 42, 349–357.
22. Sandelowski, M.; Docherty, S.; Emden, C. Qualitative metasynthesis: Issues and techniques. Res. Nurs. Health 1997, 20, 365–371.
23. Moher, D.; Liberati, A.; Tetzlaff, J.; Altman, D.G.; PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. PLoS Med. 2009, 6, e1000097.
24. Abernethy, B.; Thomas, K.T.; Thomas, J.T. Strategies for improving understanding of motor expertise [or mistakes we have made and things we have learned!!]. In Advances in Psychology, 2nd ed.; Starkes, J.L., Allard, F., Eds.; Elsevier: Amsterdam, The Netherlands, 1993; Volume 102, pp. 317–356.
25. Hernández, F.J.M.; Romero, F.Á.; Vaíllo, R.R.; del Campo, V.L. Visual behaviour of tennis coaches in a court and video-based conditions. RICYDE Rev. Int. Cienc. Deporte 2006, 2, 29–41.
26. Ericsson, K.A.; Ward, P. Capturing the naturally occurring superior performance of experts in the laboratory: Toward a science of expert and exceptional performance. Curr. Dir. Psychol. Sci. 2007, 16, 346–350.
27. Cauraugh, J.H.; Janelle, C.M. Visual Search and Cue Utilization in Racket Sports: Interceptive Actions in Sport; Routledge Taylor & Francis Group: London, UK, 2002.
28. Afonso, J.; Garganta, J.; Mcrobert, A.; Williams, M.; Mesquita, I. Visual search behaviours and verbal reports during film-based and in situ representative tasks in volleyball. Eur. J. Sport Sci. 2014, 14, 177–184.
29. Zeuwts, L.; Vansteenkiste, P.; Deconinck, F.; van Maarseveen, M.; Savelsbergh, G.; Cardon, G.; Lenoir, M. Is gaze behaviour in a laboratory context similar to that in real-life? A study in bicyclists. Transp. Res. Part F Traffic Psychol. Behav. 2016, 43, 131–140.
30. Petit, J.-P.; Ripoll, H. Scene perception and decision making in sport simulation: A masked priming investigation. Int. J. Sport Psychol. 2008, 39, 1–19.
31. Mann, D.L.; Farrow, D.; Shuttleworth, R.; Hopwood, M.; MacMahon, C. The influence of viewing perspective on decision-making and visual search behaviour in an invasive sport. Int. J. Sport Psychol. 2009, 40, 546–564.
32. Land, M.; Tatler, B. Looking and Acting: Vision and Eye Movements in Natural Behaviour; Oxford University Press: Oxford, UK, 2009.
33. Akbaş, A.; Marszałek, W.; Kamieniarz, A.; Polechoński, J.; Słomka, K.J.; Juras, G. Application of Virtual Reality in Competitive Athletes–A Review. J. Hum. Kinet. 2019, 69, 5–16.
34. Basdogan, C.; Sedef, M.; Harders, M.; Wesarg, S. VR-based simulators for training in minimally invasive surgery. IEEE Comput. Graph. Appl. 2007, 27, 54–66.
35. Bailenson, J.N.; Yee, N.; Blascovich, J.; Beall, A.C.; Lundblad, N.; Jin, M. The Use of Immersive Virtual Reality in the Learning Sciences: Digital Transformations of Teachers, Students, and Social Context. J. Learn. Sci. 2008, 17, 102–141.
36. Bideau, B.; Kulpa, R.; Ménardais, S.; Fradet, L.; Multon, F.; Delamarche, P.; Arnaldi, B. Real handball goalkeeper vs. virtual handball thrower. Presence Teleoperat. Virtual Environ. 2003, 12, 411–421.
37. Bandow, N.; Witte, K.; Masik, S. Development and Evaluation of a Virtual Test Environment for Performing Reaction Tasks. Int. J. Comput. Sci. Sport 2012, 11, 4–15.
38. Dörner, R.; Broll, W.; Grimm, P.; Jung, B. Virtual und Augmented Reality. In Grundlagen und Methoden der Virtuellen und Augmentierten Realität; Springer: Berlin/Heidelberg, Germany, 2013.
39. Pastel, S.; Chen, C.-H.; Martin, L.; Naujoks, M.; Petri, K.; Witte, K. Comparison of gaze accuracy and precision in real-world and virtual reality. Virtual Real. 2020, 25, 175–189.
40. Tauscher, J.-P.; Schottky, F.W.; Grogorick, S.; Bittner, P.M.; Mustafa, M.; Magnor, M. Immersive EEG: Evaluating Electroencephalography in Virtual Reality. In Proceedings of the 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Osaka, Japan, 23–27 March 2019; IEEE: New York, NY, USA, 2019; pp. 1794–1800.
41. Bandow, N.; Emmermacher, P.; Stucke, C.; Masik, S.; Witte, K. Comparison of a Video and a Virtual Based Environment Using the Temporal and Spatial Occlusion Technique for Studying Anticipation in Karate. Int. J. Comput. Sci. Sport 2014, 13, 1.
42. Witte, K.; Emmermacher, P.; Bandow, N.; Masik, S. Usage of virtual reality technology to study reactions in karate-kumite. Int. J. Sports Sci. Eng. 2012, 6, 17–24.
43. Petri, K.; Bandow, N.; Masik, S.; Witte, K. Improvement of Early Recognition of Attacks in Karate Kumite Due to Training in Virtual Reality. Sportarea 2019, 4, 294–308.
44. Cohen, M.A.; Botch, T.L.; Robertson, C.E. The limits of color awareness during active, real-world vision. Proc. Natl. Acad. Sci. USA 2020, 117, 13821–13827.
45. Roth, T.; Weier, M.; Hinkenjann, A.; Li, Y.; Slusallek, P. A Quality-Centered Analysis of Eye Tracking Data in Foveated Rendering. J. Eye Mov. Res. 2017, 10.
46. Turnbull, P.R.K.; Phillips, J.R. Ocular effects of virtual reality headset wear in young adults. Sci. Rep. 2017, 7, 16172.
47. Hoffman, D.M.; Girshick, A.R.; Akeley, K.; Banks, M.S. Vergence–accommodation conflicts hinder visual performance and cause visual fatigue. J. Vis. 2008, 8, 33.
48. Weigelt, K.; Wiemeyer, J. Depth perception and spatial presence experience in stereoscopic 3D sports broadcasts. In Proceedings of the 2012 International Conference on 3D Imaging (IC3D), Liège, Belgium, 3–5 December 2012; IEEE: New York, NY, USA, 2012; pp. 1–6.
49. Dicks, M.; Davids, K.; Button, C. Representative task design for the study of perception and action in sport. Int. J. Sport Psychol. 2009, 40, 506.
50. Covaci, A.; Olivier, A.-H.; Multon, F. Visual perspective and feedback guidance for vr free-throw training. IEEE Comput. Graph. Appl. 2015, 35, 55–65.
51. Keil, J.; Edler, D.; O’Meara, D.; Korte, A.; Dickmann, F. Effects of Virtual Reality Locomotion Techniques on Distance Estimations. ISPRS Int. J. Geo-Inf. 2021, 10, 150.
Figure 1. PRISMA flow diagram of the literature search.
Table 1. Articles thematizing differences between video and in situ stimulus presentation.

| Study | N | Population/Sports/Movement | Outcome Measure | Stimulus Presentation | Statistics and Effects |
|---|---|---|---|---|---|
| Hernández et al., 2006 | 10 males | Tennis coaches (five experienced, five novices); error detection in second topspin serves (10 trials) | Number of fixations, fixation duration | 2-D video screen; in situ presentation (on court); 2-D video screen | Fixations ↓ and fixation duration ↓ for the 3-D and second 2-D presentation (p < 0.001); novices more fixations than experienced |
| Dicks et al., 2010 | 8 males | Experienced goalkeepers; penalty kicks | Mean fixation duration, mean number of fixation locations, mean number of fixations | In situ with verbal, movement, and interceptive response; video-based with verbal and movement response | Sig. differences for condition: fixation duration (F [4,23] = 3.117, p < 0.05) and number of fixation locations (F [4,23] = 4.218, p < 0.01); no sig. difference: number of fixations (F [4,23] = 2.404, p = 0.08) |
| Afonso et al., 2014 | 9 females | Experienced volleyball players; acted as backcourt defenders (six scenarios) | Number of fixations, fixation duration, number of locations | In situ with verbal response; video-based with verbal response | Fixation duration ↓ for video-based presentation (p = 0.02); no differences for number of fixations (p = 0.37) and locations (p = 0.10) |
| Zeuwts et al., 2016 | 8 females, 13 males | Adults without sports background; high-quality and low-quality bicycle path (±700 m) | Number of fixations, saccades between AOIs (areas of interest) | In situ and video condition | Significant correlation (r = 0.507, p < 0.001) for dwell time (in %) between laboratory and field condition |

↓ = decrease of factor.
Table 2. Articles thematizing different perspectives in video presentation.

| Study | N | Population/Sports/Movement | Outcome Measure | Stimulus Presentation | Statistics and Effects |
|---|---|---|---|---|---|
| Petit & Ripoll, 2008 | Unknown | Experienced vs. inexperienced soccer players | Decision latency and visual behavior (no further description) | Broadcast point of view (external) vs. view of the players (internal) | Internal presentation led to faster and more precise decisions; significant influence of presentation mode on decision latency and visual behavior (no test statistics reported) |
| Mann et al., 2009 | 19 males | Skilled youth soccer players | Number of fixations | Player perspective vs. aerial view | More extensive search and higher rate of fixations in the aerial perspective (t [12] = 4.90, p < 0.001, d = 0.92); shorter search rate in the aerial view (t [12] = 4.90, p < 0.002, d = 0.75); significant effect of viewing perspective on fixation location (F [8,5] = 6.56, p = 0.027, ηp² = 0.91) |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
