Vision, Volume 3, Issue 3 (September 2019) – 16 articles

Cover Story: When people track moving faces, they fixate each “target” face one at a time to activate and refresh a high-resolution representation in working memory. The resolution of target representations deteriorates with time. In the cover article, we review all published eye-tracking studies examining the tracking of multiple moving objects. One main conclusion is that eye behavior when tracking objects with distinct identities differs significantly from eye behavior when tracking visually identical objects.
14 pages, 443 KiB  
Article
Inefficient Eye Movements: Gamification Improves Task Execution, But Not Fixation Strategy
by Warren R. G. James, Josephine Reuther, Ellen Angus, Alasdair D. F. Clarke and Amelia R. Hunt
Vision 2019, 3(3), 48; https://doi.org/10.3390/vision3030048 - 18 Sep 2019
Cited by 4 | Viewed by 2512
Abstract
Decisions about where to fixate are highly variable and often inefficient. In the current study, we investigated whether such decisions would improve with increased motivation. Participants had to detect a discrimination target, which would appear in one of two boxes, but only after they chose a location to fixate. The distance between the boxes determines which fixation location maximises the probability of seeing the target: participants should fixate between the two boxes when they are close together, and on one of the two boxes when they are far apart. We “gamified” this task, giving participants easy-to-track rewards that were contingent on discrimination accuracy. Their decisions and performance were compared to previous results that were gathered in the absence of this additional motivation. We used a Bayesian beta regression model to estimate the size of the effect and associated variance. The results demonstrate that discrimination accuracy does indeed improve in the presence of performance-related rewards. However, there was no difference in eye movement strategy between the two groups, suggesting this improvement in accuracy was not due to the participants making more optimal eye movement decisions. Instead, the motivation encouraged participants to expend more effort on other aspects of the task, such as paying more attention to the boxes and making fewer response errors.
(This article belongs to the Special Issue Selected Papers from the Scottish Vision Group Meeting 2019)
13 pages, 1041 KiB  
Article
An Adaptive Homeostatic Algorithm for the Unsupervised Learning of Visual Features
by Laurent U. Perrinet
Vision 2019, 3(3), 47; https://doi.org/10.3390/vision3030047 - 16 Sep 2019
Cited by 6 | Viewed by 3186
Abstract
The formation of structure in the visual system, that is, of the connections between cells within neural populations, is by and large an unsupervised learning process. In the primary visual cortex of mammals, for example, one can observe during development the formation of cells selective to localized, oriented features, which results in the development of a representation of images’ edges in area V1. This can be modeled using sparse Hebbian learning algorithms, which alternate a coding step to encode the information with a learning step to find the proper encoder. A major difficulty of such algorithms is the joint problem of finding a good representation while the encoders are still immature, and of learning good encoders with a nonoptimal representation. To solve this problem, this work introduces a new regulation process between learning and coding, motivated by the homeostasis processes observed in biology. Such an optimal homeostasis rule is implemented by including an adaptation mechanism based on nonlinear functions that balance the antagonistic processes occurring at the coding and learning time scales. It is compatible with a neuromimetic architecture and allows for a more efficient emergence of localized filters sensitive to orientation. In addition, this homeostasis rule is simplified by implementing a simple heuristic on the probability of activation of neurons. Compared to the optimal homeostasis rule, numerical simulations show that this heuristic yields a faster unsupervised learning algorithm while retaining much of its effectiveness. These results demonstrate the potential of such a strategy in machine learning, which is illustrated by showing the effect of homeostasis in the emergence of edge-like filters for a convolutional neural network.
16 pages, 909 KiB  
Review
Seeing Beyond Salience and Guidance: The Role of Bias and Decision in Visual Search
by Alasdair D. F. Clarke, Anna Nowakowska and Amelia R. Hunt
Vision 2019, 3(3), 46; https://doi.org/10.3390/vision3030046 - 11 Sep 2019
Cited by 7 | Viewed by 3770
Abstract
Visual search is a popular tool for studying a range of questions about perception and attention, thanks to the ease with which the basic paradigm can be controlled and manipulated. While often thought of as a sub-field of vision science, search tasks are significantly more complex than most other perceptual tasks, with strategy and decision playing an essential, but neglected, role. In this review, we briefly describe some of the important theoretical advances about perception and attention that have been gained from studying visual search within the signal detection and guided search frameworks. Under most circumstances, search also involves executing a series of eye movements. We argue that understanding the contribution of biases, routines and strategies to visual search performance over multiple fixations will lead to new insights about these decision-related processes and how they interact with perception and attention. We also highlight the neglected potential for variability, both within and between searchers, to contribute to our understanding of visual search. The exciting challenge will be to account for variations in search performance caused by these numerous factors and their interactions. We conclude the review with some recommendations for ways future research can tackle these challenges to move the field forward.
(This article belongs to the Special Issue Eye Movements and Visual Cognition)
16 pages, 244 KiB  
Article
What Can Eye Movements Tell Us about Higher Level Comprehension?
by Anne E. Cook and Wei Wei
Vision 2019, 3(3), 45; https://doi.org/10.3390/vision3030045 - 06 Sep 2019
Cited by 17 | Viewed by 3127
Abstract
The majority of eye tracking studies in reading address word-level or sentence-level comprehension. By comparison, relatively few eye tracking studies of reading examine questions related to higher level comprehension in the processing of longer texts. We present data from an eye tracking study of anaphor resolution in order to examine specific issues related to this discourse phenomenon and to raise more general methodological and theoretical issues in eye tracking studies of discourse processing. This includes matters related to the design of materials as well as the interpretation of measures with regard to underlying comprehension processes. In addition, we provide several examples from eye tracking studies of discourse to demonstrate the kinds of questions that may be addressed with this methodology, particularly with respect to the temporality of processing in higher level comprehension and how such questions correspond to recent theoretical arguments in the field.
(This article belongs to the Special Issue Eye Movements and Visual Cognition)
24 pages, 3050 KiB  
Article
Spatial Frequency Tuning and Transfer of Perceptual Learning for Motion Coherence Reflects the Tuning Properties of Global Motion Processing
by Jordi M. Asher, Vincenzo Romei and Paul B. Hibbard
Vision 2019, 3(3), 44; https://doi.org/10.3390/vision3030044 - 02 Sep 2019
Cited by 1 | Viewed by 4715
Abstract
Perceptual learning is typically highly specific to the stimuli and task used during training. However, it has recently been shown that training on global motion can transfer to untrained tasks, reflecting the generalising properties of mechanisms at this level of processing. We investigated (i) whether feedback was required for learning in a motion coherence task, (ii) the transfer of training on a global motion coherence task across spatial frequency and (iii) the transfer of this training to a measure of contrast sensitivity. In our first experiment, two groups, with and without feedback, trained for ten days on a broadband motion coherence task. Results indicated that feedback was a requirement for robust learning. In the second experiment, training consisted of five days of direction discrimination using one of three motion coherence stimuli (in which individual elements consisted of either broadband Gaussian blobs or low- or high-frequency random-dot Gabor patches), with trial-by-trial auditory feedback. A pre- and post-training assessment was conducted for each of the three types of global motion coherence conditions and for high and low spatial frequency contrast sensitivity (all without feedback). Our training paradigm was successful at eliciting improvement in the trained tasks over the five days. Post-training assessments found evidence of transfer for the motion coherence task exclusively in the group trained on low spatial frequency elements. For the contrast sensitivity tasks, improved performance was observed for low- and high-frequency stimuli following motion coherence training with broadband stimuli, and for low-frequency stimuli following low-frequency training. Our findings are consistent with perceptual learning that depends on the global stage of motion processing in higher cortical areas, which is broadly tuned for spatial frequency with a preference for low frequencies.
(This article belongs to the Special Issue Selected Papers from the Scottish Vision Group Meeting 2019)
13 pages, 1684 KiB  
Article
Reading and Misleading: Changes in Head and Eye Movements Reveal Attentional Orienting in a Social Context
by Tom Foulsham, Monika Gejdosova and Laura Caunt
Vision 2019, 3(3), 43; https://doi.org/10.3390/vision3030043 - 27 Aug 2019
Cited by 3 | Viewed by 2653
Abstract
Social attention describes how observers orient to social information and exhibit behaviors such as gaze following. These behaviors are examples of how attentional orienting may differ in the presence of other people, although they have typically been studied without actual social presence. In the present study, we ask whether orienting, as measured by head and eye movements, changes when participants are trying to mislead or hide their attention from a bystander. In two experiments, observers performed a preference task while being video-recorded, and subsequent participants were asked to guess the observers’ responses from a video of the head and upper body. In a second condition, observers were told to try to mislead the “guesser”. The results showed that participants’ preference responses could be guessed from videos of the head and, critically, that participants spontaneously changed their orienting behavior in order to mislead, by reducing the rate at which they made large head movements. Masking the eyes with sunglasses suggested that head movements were most important in our setup. This indicates that head and eye movements can be used flexibly according to the socio-communicative context.
(This article belongs to the Special Issue Visual Orienting and Conscious Perception)
14 pages, 1425 KiB  
Article
Task-Irrelevant Features in Visual Working Memory Influence Covert Attention: Evidence from a Partial Report Task
by Rebecca M. Foerster and Werner X. Schneider
Vision 2019, 3(3), 42; https://doi.org/10.3390/vision3030042 - 27 Aug 2019
Cited by 3 | Viewed by 3810
Abstract
Selecting a target based on a representation in visual working memory (VWM) affords biasing covert attention towards objects with memory-matching features. Recently, we showed that even task-irrelevant features of a VWM template bias attention. Specifically, when participants had to saccade to a cued shape, distractors sharing the cue’s search-irrelevant color captured the eyes. While a saccade always aims at one target location, multiple locations can be attended covertly. Here, we investigated whether covert attention is captured in the same way as the eyes. In our partial report task, each trial started with a shape-defined search cue, followed by a fixation cross. Next, two colored shapes, each including a letter, appeared left and right of fixation, followed by masks. The letter inside the shape matching the preceding cue had to be reported. In Experiment 1, either the target, the distractor, both, or neither object matched the cue’s irrelevant color. Target-letter reports were most frequent in target-match trials and least frequent in distractor-match trials. Irrelevant cue and target color never matched in Experiment 2. Still, participants reported the distractor letter more often, to the target’s disadvantage, when cue and distractor color matched. Thus, irrelevant features of a VWM template can influence covert attention in an involuntarily object-based manner when searching for trial-wise varying targets.
14 pages, 1806 KiB  
Article
Treat and Extend Treatment Interval Patterns with Anti-VEGF Therapy in nAMD Patients
by Adrian Skelly, Vladimir Bezlyak, Gerald Liew, Elisabeth Kap and Alexandros Sagkriotis
Vision 2019, 3(3), 41; https://doi.org/10.3390/vision3030041 - 26 Aug 2019
Cited by 19 | Viewed by 4111
Abstract
Treat and extend (T&E) is a standard regimen for treating neovascular age-related macular degeneration (nAMD) with anti-vascular endothelial growth factors (anti-VEGFs), but the treatment intervals attained are not well documented. This retrospective, non-comparative, non-randomised study of eyes with nAMD classified treatment interval sequences in a T&E cohort in Australia using Electronic Medical Records (EMR) data. We analysed data from 632 treatment-naïve eyes of 555 patients injected with ranibizumab, aflibercept or unlicensed bevacizumab between January 2012 and June 2016 (mean baseline age 78.0 years). Eyes were categorised into non-overlapping clusters of interval sequences based on the first 12 months of follow-up. We identified 523 different treatment interval sequences. The largest cluster, of 197 (31.5%) eyes, attained an 8-week treatment interval before dropping to a shorter interval, followed by 168 (26.8%) eyes that either did not reach an 8-week interval or attained only a single 8-week interval at the end of the study period. A total of 65 (10.4%) and 83 (13.3%) eyes reached and sustained (≥2 consecutive injection intervals of the same length) an 8-week and a 12-week interval, respectively. This study demonstrates highly individualised treatment patterns in the first year of anti-VEGF therapy in Australia under T&E regimens, with the majority of patients requiring injections more frequently than once every 8 weeks.
21 pages, 650 KiB  
Article
Inhibitory and Facilitatory Cueing Effects: Competition between Exogenous and Endogenous Mechanisms
by Alfred Lim, Vivian Eng, Caitlyn Osborne, Steve M. J. Janssen and Jason Satel
Vision 2019, 3(3), 40; https://doi.org/10.3390/vision3030040 - 22 Aug 2019
Cited by 1 | Viewed by 2869
Abstract
Inhibition of return is characterized by delayed responses to previously attended locations when the cue-target onset asynchrony (CTOA) is long enough. However, when cues are predictive of a target’s location, faster reaction times to cued as compared to uncued targets are normally observed. In this series of experiments investigating saccadic reaction times, we manipulated cue predictability to 25% (counterpredictive), 50% (nonpredictive), and 75% (predictive) to investigate the interaction between endogenous facilitatory cueing effects (FCEs) and inhibitory cueing effects (ICEs). Overall, larger ICEs were seen in the counterpredictive condition than in the nonpredictive condition, and no ICE was found in the predictive condition. Based on the hypothesized additivity of FCEs and ICEs, we reasoned that the null ICEs observed in the predictive condition are the result of two opposing mechanisms balancing each other out, and that the large ICEs observed with counterpredictive cueing can be attributed to the combination of endogenous facilitation at uncued locations with inhibition at cued locations. Our findings suggest that the endogenous activity contributed by cue predictability can reduce the overall inhibition observed when the mechanisms occur at the same location, or enhance behavioral inhibition when the mechanisms occur at opposite locations.
5 pages, 539 KiB  
Perspective
The Sun/Moon Illusion in a Medieval Irish Astronomical Tract
by Helen E. Ross
Vision 2019, 3(3), 39; https://doi.org/10.3390/vision3030039 - 07 Aug 2019
Viewed by 3383
Abstract
The Irish Astronomical Tract is a 14th–15th century Gaelic document, based mainly on a Latin translation of a work by the eighth-century Jewish astronomer Messahala. It contains a passage about the sun illusion—the apparent enlargement of celestial bodies when near the horizon compared to higher in the sky. This passage occurs in a chapter concerned with proving that the Earth is a globe rather than flat. Here the author denies that the change in size is caused by a change in the sun’s distance, and instead ascribes it (incorrectly) to magnification by atmospheric vapours, likening it to the bending of light when looking from air to water or through glass spectacles. This section does not occur in the Latin version of Messahala. The Irish author may have based the vapour account on Aristotle, Ptolemy or Cleomedes, or on later authors who relied on them. He seems to have been unaware of alternative perceptual explanations. The refraction explanation persists today in folk science.
(This article belongs to the Special Issue Selected Papers from the Scottish Vision Group Meeting 2019)
18 pages, 312 KiB  
Review
The Nature of Unconscious Attention to Subliminal Cues
by Seema Prasad and Ramesh Kumar Mishra
Vision 2019, 3(3), 38; https://doi.org/10.3390/vision3030038 - 01 Aug 2019
Cited by 12 | Viewed by 4688
Abstract
Attentional selection in humans is mostly determined by what is important to them or by the saliency of the objects around them. How our visual and attentional systems manage these various sources of attentional capture is one of the most intensely debated issues in cognitive psychology. Along with the traditional dichotomy of goal-driven and stimulus-driven theories, newer frameworks such as reward learning and selection history have also been proposed to understand how a stimulus captures attention. However, surprisingly little is known about the different forms of attentional control by information that is not consciously accessible to us. In this article, we review several studies that have examined attentional capture by subliminal cues. We specifically focus on spatial cuing studies that have shown, through response times and eye movements, that subliminal cues can affect attentional selection. A majority of these studies have argued that attentional capture by subliminal cues is entirely automatic and stimulus-driven. We evaluate their claims of automaticity and contrast them with a few other studies that have suggested that orienting to unconscious cues proceeds in a manner contingent on the top-down goals of the individual. Resolving this debate has consequences for understanding the depths and the limits of unconscious processing, and has implications for general theories of attentional selection as well. In this review, we aim to provide the current status of research in this domain and point out open questions and future directions.
(This article belongs to the Special Issue Eye Movements and Visual Cognition)
26 pages, 909 KiB  
Review
Eye Behavior During Multiple Object Tracking and Multiple Identity Tracking
by Jukka Hyönä, Jie Li and Lauri Oksama
Vision 2019, 3(3), 37; https://doi.org/10.3390/vision3030037 - 31 Jul 2019
Cited by 14 | Viewed by 4861
Abstract
We review all published eye-tracking studies to date that have used eye movements to examine multiple object tracking (MOT) or multiple identity tracking (MIT). In both tasks, observers dynamically track multiple moving objects. In MOT the objects are identical, whereas in MIT they have distinct identities. In MOT, observers prefer to fixate on blank space, often the center of gravity formed by the moving targets (the centroid). In contrast, in MIT observers have a strong preference for a target-switching strategy, presumably to refresh and maintain identity-location bindings for the targets. To account for the qualitative differences between MOT and MIT, two mechanisms have been posited: a position tracking mechanism (MOT) and an identity tracking mechanism (MOT & MIT). Eye-tracking studies of MOT have also demonstrated that observers execute rescue saccades toward targets that are in danger of becoming occluded or are about to change direction after a collision. Crowding attracts the eyes toward the crowded objects in order to increase visual acuity for them and prevent target loss. We suggest that future studies should concentrate more on MIT, as it more closely resembles tracking in the real world.
(This article belongs to the Special Issue Eye Movements and Visual Cognition)
15 pages, 711 KiB  
Article
Grasping Discriminates between Object Sizes Less Not More Accurately than the Perceptual System
by Frederic Göhringer, Miriam Löhr-Limpens, Constanze Hesse and Thomas Schenk
Vision 2019, 3(3), 36; https://doi.org/10.3390/vision3030036 - 19 Jul 2019
Cited by 3 | Viewed by 3640
Abstract
Ganel, Freud, Chajut, and Algom (2012) demonstrated that maximum grip apertures (MGAs) differ significantly when grasping perceptually identical objects. From this finding they concluded that the visual size information used by the motor system is more accurate than the visual size information available to the perceptual system. A direct comparison between accuracy in the perception and action systems is, however, problematic, given that accuracy in the perceptual task is measured using a dichotomous variable, while accuracy in the visuomotor task is determined using a continuous variable. We addressed this problem by dichotomizing the visuomotor measures. Using this approach, our results show that size discrimination in grasping is in fact inferior to perceptual discrimination, thereby contradicting the original suggestion put forward by Ganel and colleagues.
(This article belongs to the Special Issue Visual Control of Action)
14 pages, 659 KiB  
Review
Regressions during Reading
by Albrecht W. Inhoff, Andrew Kim and Ralph Radach
Vision 2019, 3(3), 35; https://doi.org/10.3390/vision3030035 - 09 Jul 2019
Cited by 15 | Viewed by 4640
Abstract
Readers occasionally move their eyes to prior text. We distinguish two types of these movements (regressions). One type consists of relatively large regressions that seek to re-process prior text and to revise represented linguistic content to improve comprehension. The other consists of relatively small regressions that seek to correct inaccurate or premature oculomotor programming to improve visual word recognition. Large regressions are guided by spatial and linguistic knowledge, while small regressions appear to be exclusively guided by knowledge of spatial location. There are substantial individual differences in the use of regressions, and college-level readers often do not regress even when this would improve sentence comprehension.
(This article belongs to the Special Issue Eye Movements and Visual Cognition)
14 pages, 3350 KiB  
Article
Factors Influencing Pseudo-Accommodation—The Difference between Subjectively Reported Range of Clear Focus and Objectively Measured Accommodation Range
by Sandeep K. Dhallu, Amy L. Sheppard, Tom Drew, Toshifumi Mihashi, Juan F. Zapata-Díaz, Hema Radhakrishnan, D. Robert Iskander and James S. Wolffsohn
Vision 2019, 3(3), 34; https://doi.org/10.3390/vision3030034 - 28 Jun 2019
Cited by 6 | Viewed by 3714
Abstract
The key determinants of the range of clear focus in pre-presbyopes, and their relative contributions to the difference between subjective range of focus and objective accommodation assessments, have not been previously quantified. Fifty participants (aged 33.0 ± 6.4 years) underwent simultaneous monocular subjective (visual acuity measured with an electronic test-chart) and objective (dynamic accommodation measured with an Aston open-field aberrometer) defocus curve testing for lenses between +2.00 and −10.00 DS in 0.50 DS steps in a randomized order. Pupil diameter and ocular aberrations (converted to visual metrics normalized for pupil size) at each level of blur were measured. The difference between the objective range over which the power of the crystalline lens changes and the subjective range of clear focus was quantified, and the results were modelled using pupil size, refractive error, tolerance to blur, and ocular aberrations. The subjective range of clear focus was principally accounted for by age (46.4%) and pupil size (19.3%). The objectively assessed accommodative range was also principally accounted for by age (27.6%) and pupil size (15.4%). Over one-quarter (26.0%) of the difference between objective accommodation and subjective range of clear focus was accounted for by age (14.0%) and spherical aberration at maximum accommodation (12.0%). There was no significant change in the objective accommodative response (F = 1.426, p = 0.229) or pupil size (F = 0.799, p = 0.554) for levels of defocus above participants’ amplitude of accommodation. Pre-presbyopes benefit from an increased subjective range of clear vision beyond their objective accommodation due in part to neural factors, resulting in a measured depth-of-focus of, on average, 1.0 D.
(This article belongs to the Special Issue Physiological Optics of Accommodation and Presbyopia)
20 pages, 285 KiB  
Review
The Changing Landscape: High-Level Influences on Eye Movement Guidance in Scenes
by Carrick C. Williams and Monica S. Castelhano
Vision 2019, 3(3), 33; https://doi.org/10.3390/vision3030033 - 28 Jun 2019
Cited by 25 | Viewed by 4602
Abstract
The use of eye movements to explore scene processing has exploded over the last decade. Eye movements provide distinct advantages when examining scene processing because they are both fast and spatially measurable. By using eye movements, researchers have investigated many questions about scene processing. Our review focuses on research performed in the last decade examining: (1) attention and eye movements; (2) where you look; (3) influence of task; (4) memory and scene representations; and (5) dynamic scenes and eye movements. Although typically addressed as separate issues, we argue that these distinctions are now holding back research progress. Instead, it is time to examine the intersections of these seemingly separate influences and how they interact, to more completely understand what eye movements can tell us about scene processing.
(This article belongs to the Special Issue Eye Movements and Visual Cognition)