Vision, Volume 3, Issue 2 (June 2019) – 21 articles

Cover Story: We can only attend to a fraction of the visual input available to us at any given moment. For this reason, visual attention is guided through scenes in real time via eye movements. How does the brain determine which scene regions and elements should be attended to at any given moment? Our research suggests that attentional priority to regions in a scene is assigned based on semantic content, not image feature content.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; PDF is the official format. To view a paper in PDF form, click the "PDF Full-text" link and open it with the free Adobe Reader.
15 pages, 1645 KiB  
Review
Eye Movements in Medical Image Perception: A Selective Review of Past, Present and Future
by Chia-Chien Wu and Jeremy M. Wolfe
Vision 2019, 3(2), 32; https://doi.org/10.3390/vision3020032 - 20 Jun 2019
Cited by 21 | Viewed by 5506
Abstract
The eye movements of experts reading medical images have been studied for many years. Unlike topics such as face perception, medical image perception research needs to cope with substantial, qualitative changes in the stimuli under study due to dramatic advances in medical imaging technology. For example, little is known about how radiologists search through 3D volumes of image data because such volumes simply did not exist when earlier eye tracking studies were performed. Moreover, improvements in the affordability and portability of modern eye trackers make other, new studies practical. Here, we review some uses of eye movements in the study of medical image perception with an emphasis on newer work. We ask how basic research on scene perception relates to studies of medical ‘scenes’ and we discuss how tracking experts’ eyes may provide useful insights for medical education and screening efficiency. Full article
(This article belongs to the Special Issue Eye Movements and Visual Cognition)

16 pages, 1123 KiB  
Article
How Does Spatial Attention Influence the Probability and Fidelity of Colour Perception?
by Austin J. Hurst, Michael A. Lawrence and Raymond M. Klein
Vision 2019, 3(2), 31; https://doi.org/10.3390/vision3020031 - 17 Jun 2019
Cited by 1 | Viewed by 3467
Abstract
Existing research has found that spatial attention alters how various stimulus properties are perceived (e.g., luminance, saturation), but few have explored whether it improves the accuracy of perception. To address this question, we performed two experiments using modified Posner cueing tasks, wherein participants made speeded detection responses to peripheral colour targets and then indicated their perceived colours on a colour wheel. In E1, cues were central and endogenous (i.e., prompted voluntary attention) and the interval between cues and targets (stimulus onset asynchrony, or SOA) was always 800 ms. In E2, cues were peripheral and exogenous (i.e., captured attention involuntarily) and the SOA varied between short (100 ms) and long (800 ms). A Bayesian mixed-model analysis was used to isolate the effects of attention on the probability and the fidelity of colour encoding. Both endogenous and short-SOA exogenous spatial cueing improved the probability of encoding the colour of targets. Improved fidelity of encoding was observed in the endogenous but not in the exogenous cueing paradigm. With exogenous cues, inhibition of return (IOR) was observed in both RT and probability at the long SOA. Overall, our findings reinforce the utility of continuous response variables in the research of attention. Full article
(This article belongs to the Special Issue Visual Orienting and Conscious Perception)
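
The probability/fidelity decomposition above lends itself to a mixture model of colour-wheel response errors: on some trials the target colour is encoded (errors cluster around zero, with fidelity given by a precision parameter), and on others the response is a guess (errors uniform on the wheel). Below is a minimal maximum-likelihood sketch of that idea, not the authors' Bayesian mixed-model analysis; function names and starting values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import vonmises

def neg_log_likelihood(params, errors):
    """errors: response minus target colour, in radians (-pi, pi]."""
    p_encode, kappa = params
    like = (p_encode * vonmises.pdf(errors, kappa)        # encoded trials
            + (1.0 - p_encode) / (2.0 * np.pi))           # uniform guesses
    return -np.sum(np.log(like))

def fit_mixture(errors):
    result = minimize(neg_log_likelihood, x0=[0.7, 5.0], args=(errors,),
                      bounds=[(1e-3, 1.0 - 1e-3), (1e-2, 100.0)])
    p_encode, kappa = result.x
    return p_encode, kappa  # probability and fidelity of encoding

# Example: compare cued vs. uncued trials (hypothetical error arrays).
# p_cued, k_cued = fit_mixture(errors_cued)
# p_uncued, k_uncued = fit_mixture(errors_uncued)
```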

16 pages, 1035 KiB  
Article
Location-Specific Orientation Set Is Independent of the Horizontal Benefit with or Without Object Boundaries
by Zhe Chen, Ailsa Humphries and Kyle R. Cave
Vision 2019, 3(2), 30; https://doi.org/10.3390/vision3020030 - 14 Jun 2019
Cited by 3 | Viewed by 3110
Abstract
Chen and Cave (2019) showed that facilitation in visual comparison tasks that had previously been attributed to object-based attention could more directly be explained as facilitation in comparing two shapes that are configured horizontally rather than vertically. They also cued the orientation of the upcoming stimulus configuration without cuing its location and found an asymmetry: the orientation cue only enhanced performance for vertical configurations. The current study replicates the horizontal benefit in visual comparison and again demonstrates that it is independent of surrounding object boundaries. In these experiments, the cue is informative about the location of the target configuration as well as its orientation, and it enhances performance for both horizontal and vertical configurations; there is no asymmetry. Either a long or a short cue can enhance performance when it is valid. Thus, Chen and Cave’s cuing asymmetry seems to reflect unusual aspects of an attentional set for orientation that must be established without knowing the upcoming stimulus location. Taken together, these studies show that a location-specific cue enhances comparison independently of the horizontal advantage, while a location-nonspecific cue produces a different type of attentional set that does not enhance comparison in horizontal configurations. Full article
(This article belongs to the Special Issue Visual Orienting and Conscious Perception)

19 pages, 4351 KiB  
Article
Contextually-Based Social Attention Diverges across Covert and Overt Measures
by Effie J. Pereira, Elina Birmingham and Jelena Ristic
Vision 2019, 3(2), 29; https://doi.org/10.3390/vision3020029 - 10 Jun 2019
Cited by 6 | Viewed by 4111
Abstract
Humans spontaneously attend to social cues like faces and eyes. However, recent data show that this behavior is significantly weakened when visual content, such as luminance and configuration of internal features, as well as visual context, such as background and facial expression, are controlled. Here, we investigated attentional biasing elicited in response to information presented within appropriate background contexts. Using a dot-probe task, participants were presented with a face–house cue pair, with a person sitting in a room and a house positioned within a picture hanging on a wall. A response target occurred at the previous location of the eyes, mouth, top of the house, or bottom of the house. Experiment 1 measured covert attention by assessing manual responses while participants maintained central fixation. Experiment 2 measured overt attention by assessing eye movements using an eye tracker. The data from both experiments indicated no evidence of spontaneous attentional biasing towards faces or facial features in manual responses; however, an infrequent, though reliable, overt bias towards the eyes of faces emerged. Together, these findings suggest that contextually-based social information does not determine spontaneous social attentional biasing in manual measures, although it may act to facilitate oculomotor behavior. Full article
(This article belongs to the Special Issue Visual Orienting and Conscious Perception)
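
In a dot-probe design like this, covert attentional biasing is indexed by response-time differences across target locations: faster responses at a location imply attention was already there. A minimal sketch of that contrast, with illustrative (not the authors') file and column names:

```python
import pandas as pd

trials = pd.read_csv("dot_probe_trials.csv")          # hypothetical file
rt = trials.groupby("target_location")["rt_ms"].mean()

# Positive scores indicate a bias toward the face region.
eyes_bias = rt[["house_top", "house_bottom"]].mean() - rt["eyes"]
mouth_bias = rt[["house_top", "house_bottom"]].mean() - rt["mouth"]
print(f"bias toward eyes: {eyes_bias:.1f} ms, toward mouth: {mouth_bias:.1f} ms")
```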

19 pages, 2976 KiB  
Article
Object Properties Influence Visual Guidance of Motor Actions
by Sharon Scrafton, Matthew J. Stainer and Benjamin W. Tatler
Vision 2019, 3(2), 28; https://doi.org/10.3390/vision3020028 - 10 Jun 2019
Cited by 1 | Viewed by 3914
Abstract
The dynamic nature of the real world poses challenges for predicting where best to allocate gaze during object interactions. The same object may require different visual guidance depending on its current or upcoming state. Here, we explore how object properties (the material and shape of objects) and object state (whether it is full of liquid, or to be set down in a crowded location) influence visual supervision while setting objects down, an element of object interaction that has been relatively neglected in the literature. In a liquid pouring task, we asked participants to move empty glasses to a filling station; to leave them empty, half fill, or completely fill them with water; and then move them again to a tray. During the first putdown (when the glasses were all empty), visual guidance was determined only by the type of glass being set down, with more unwieldy champagne flutes being more likely to be guided than other types of glasses. However, when the glasses were then filled, glass type no longer mattered, with the material and fill level predicting whether the glasses were set down with visual supervision: full, glass material containers were more likely to be guided than empty, plastic ones. The key finding from this research is that the visual system responds flexibly to dynamic changes in object properties, likely based on predictions of the risk associated with setting down the object unsupervised by vision. The factors that govern these mechanisms can vary within the same object as it changes state. Full article
(This article belongs to the Special Issue Visual Control of Action)
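
Because visual supervision of a set-down is a binary outcome, the relationship described above (guidance predicted by glass type, material, and fill level) is naturally cast as a logistic regression. The sketch below illustrates that framing under assumed variable names; it is not the authors' actual analysis code.

```python
import pandas as pd
import statsmodels.formula.api as smf

setdowns = pd.read_csv("setdown_trials.csv")   # hypothetical file
# guided: 1 if gaze was on the set-down location during the put-down, else 0
model = smf.logit("guided ~ C(glass_type) + C(material) + fill_level",
                  data=setdowns).fit()
print(model.summary())
```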

18 pages, 3651 KiB  
Article
Distinguishing Hemodynamics from Function in the Human LGN Using a Temporal Response Model
by Kevin DeSimone and Keith A. Schneider
Vision 2019, 3(2), 27; https://doi.org/10.3390/vision3020027 - 07 Jun 2019
Cited by 4 | Viewed by 3725
Abstract
We developed a temporal population receptive field model to differentiate the neural and hemodynamic response functions (HRF) in the human lateral geniculate nucleus (LGN). The HRF in the human LGN is dominated by the richly vascularized hilum, a structure that serves as a point of entry for the blood vessels supplying the substrates of central vision. The location of the hilum along the ventral surface of the LGN, and the resulting gradient in the amplitude of the HRF across the extent of the LGN, have made it difficult to segment the human LGN into its more interesting magnocellular and parvocellular regions, which represent two distinct visual processing streams. Here, we show that an intrinsic clustering of the LGN responses to a variety of visual inputs reveals the hilum, and further, that this clustering is dominated by the amplitude of the HRF. Our temporal population receptive field model includes separate sustained and transient temporal impulse response functions that vary on a much shorter timescale than the HRF. When we account for the HRF amplitude, we demonstrate that this temporal response model is able to functionally segregate the residual responses according to their temporal properties. Full article
(This article belongs to the Special Issue Visual Perception and Its Neural Mechanisms)
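
The core idea, separating fast neural temporal channels from the slow HRF, can be sketched as a forward model: sustained (low-pass) and transient (band-pass) impulse responses drive a neural signal that is then convolved with an HRF whose amplitude is a separate nuisance parameter. The sketch below uses our own simplified impulse-response shapes, not the paper's exact parameterization.

```python
import numpy as np
from scipy.stats import gamma

dt = 0.01                          # seconds; fast grid for the neural IRFs
t = np.arange(0.0, 1.0, dt)

def sustained_irf(tau=0.05):
    """Monophasic (low-pass) impulse response; illustrative shape."""
    h = (t / tau) * np.exp(-t / tau)
    return h / h.sum()

def transient_irf(tau=0.05):
    """Biphasic (band-pass) response: derivative of the sustained IRF."""
    h = np.gradient(sustained_irf(tau), dt)
    return h / np.abs(h).sum()

def hrf():
    """Double-gamma HRF on a slow (tens of seconds) timescale."""
    t_h = np.arange(0.0, 30.0, dt)
    return gamma.pdf(t_h, 6) - gamma.pdf(t_h, 16) / 6.0

def predicted_bold(stimulus, w_sus, w_trans, hrf_amplitude):
    """stimulus is sampled on the dt grid; amplitude soaks up vascular gain."""
    n = len(stimulus)
    neural = (w_sus * np.convolve(stimulus, sustained_irf())[:n]
              + w_trans * np.abs(np.convolve(stimulus, transient_irf()))[:n])
    return hrf_amplitude * np.convolve(neural, hrf())[:n]
```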

11 pages, 5684 KiB  
Perspective
Ocular Equivocation: The Rivalry Between Wheatstone and Brewster
by Nicholas J. Wade
Vision 2019, 3(2), 26; https://doi.org/10.3390/vision3020026 - 06 Jun 2019
Cited by 7 | Viewed by 4082
Abstract
Ocular equivocation was the term given by Brewster in 1844 to binocular contour rivalry seen with Wheatstone’s stereoscope. The rivalries between Wheatstone and Brewster were personal as well as perceptual. In the 1830s, both Wheatstone and Brewster came to stereoscopic vision armed with their individual histories of research on vision. Brewster was an authority on physical optics and had devised the kaleidoscope; Wheatstone extended his research on audition to render acoustic patterns visible with his kaleidophone or phonic kaleidoscope. Both had written on subjective visual phenomena, a topic upon which they first clashed at the inaugural meeting of the British Association for the Advancement of Science in 1832 (the year Wheatstone made the first stereoscopes). Wheatstone published his account of the mirror stereoscope in 1838; Brewster’s initial reception of it was glowing but he later questioned Wheatstone’s priority. They both described investigations of binocular contour rivalry but their interpretations diverged. As was the case for stereoscopic vision, Wheatstone argued for central processing whereas Brewster’s analysis was peripheral and based on visible direction. Brewster’s lenticular stereoscope and binocular camera were described in 1849. They later clashed over Brewster’s claim that the Chimenti drawings were made for a 16th-century stereoscope. The rivalry between Wheatstone and Brewster is illustrated with anaglyphs that can be viewed with red/cyan glasses and in Universal Freeview format; they include rivalling ‘perceptual portraits’ as well as examples of the stimuli used to study ocular equivocation. Full article
(This article belongs to the Special Issue Selected Papers from the Scottish Vision Group Meeting 2019)

16 pages, 4018 KiB  
Review
Recent Advances of Computerized Graphical Methods for the Detection and Progress Assessment of Visual Distortion Caused by Macular Disorders
by Navid Mohaghegh, Ebrahim Ghafar-Zadeh and Sebastian Magierowski
Vision 2019, 3(2), 25; https://doi.org/10.3390/vision3020025 - 05 Jun 2019
Cited by 3 | Viewed by 6188
Abstract
Recent advances in computerized graphical methods have received significant attention for the detection and home monitoring of various visual distortions caused by macular disorders such as macular edema, central serous chorioretinopathy, and age-related macular degeneration. After a brief review of macular disorders and their conventional diagnostic methods, this paper reviews such graphical interface methods, including the computerized Amsler Grid, the Preferential Hyperacuity Perimeter, and the Three-dimensional Computer-automated Threshold Amsler Grid. Thereafter, the challenges these computerized methods face in achieving accurate and rapid detection of macular disorders are discussed. Early detection and progress assessment can significantly enhance the clinical procedures required for the diagnosis and treatment of macular disorders. Full article
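
For concreteness, the standard Amsler grid stimulus that these computerized tools build on is easy to render; a real screening application would add fixation control, distortion marking, and response logging. A minimal matplotlib sketch:

```python
import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(5, 5))
for k in range(21):                      # a 20 x 20 grid of squares
    ax.axhline(k, color="black", linewidth=0.8)
    ax.axvline(k, color="black", linewidth=0.8)
ax.plot(10, 10, "ko", markersize=6)      # central fixation dot
ax.set_xlim(0, 20); ax.set_ylim(0, 20)
ax.set_aspect("equal"); ax.axis("off")
plt.show()
```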

15 pages, 741 KiB  
Review
Using Eye Movements to Understand how Security Screeners Search for Threats in X-Ray Baggage
by Nick Donnelly, Alex Muhl-Richardson, Hayward J. Godwin and Kyle R. Cave
Vision 2019, 3(2), 24; https://doi.org/10.3390/vision3020024 - 04 Jun 2019
Cited by 9 | Viewed by 11886
Abstract
There has been an increasing drive to understand failures in searches for weapons and explosives in X-ray baggage screening. Tracking eye movements during the search has produced new insights into the guidance of attention during the search, and the identification of targets once they are fixated. Here, we review the eye-movement literature that has emerged on this front over the last fifteen years, including a discussion of the problems that real-world searchers face when trying to detect targets that could do serious harm to people and infrastructure. Full article
(This article belongs to the Special Issue Eye Movements and Visual Cognition)

13 pages, 703 KiB  
Review
The Changing Role of Phonology in Reading Development
by Sara V. Milledge and Hazel I. Blythe
Vision 2019, 3(2), 23; https://doi.org/10.3390/vision3020023 - 30 May 2019
Cited by 14 | Viewed by 6503
Abstract
Processing of both a word’s orthography (its printed form) and its phonology (its associated speech sounds) is critical for lexical identification during reading, both in beginning and skilled readers. Theories of learning to read typically posit a developmental change, from early readers’ reliance on phonology to more skilled readers’ development of direct orthographic-semantic links. Specifically, in becoming a skilled reader, the extent to which an individual processes phonology during lexical identification is thought to decrease. Recent data from eye movement research suggest, however, that the developmental change in phonological processing is somewhat more nuanced than this. Such studies show that phonology influences lexical identification in beginning and skilled readers in both typically and atypically developing populations. These data indicate, therefore, that the developmental change might better be characterised as a transition from overt decoding to abstract, covert recoding. We do not stop processing phonology as we become more skilled at reading; rather, the nature of that processing changes. Full article
(This article belongs to the Special Issue Eye Movements and Visual Cognition)

32 pages, 3517 KiB  
Review
What Can Eye Movements Tell Us about Subtle Cognitive Processing Differences in Autism?
by Philippa L Howard, Li Zhang and Valerie Benson
Vision 2019, 3(2), 22; https://doi.org/10.3390/vision3020022 - 24 May 2019
Cited by 16 | Viewed by 6990
Abstract
Autism spectrum disorder (ASD) is a neurodevelopmental condition principally characterised by impairments in social interaction and communication, and by repetitive behaviours and interests. This article reviews the eye movement studies designed to investigate the underlying sampling or processing differences that might account for the principal characteristics of autism. Following a brief summary of a previous review chapter by one of the authors of the current paper, a detailed review of eye movement studies investigating various aspects of processing in autism over the last decade is presented. The literature is organised into sections covering different cognitive components, including studies of language and of social communication and interaction. The aim of the review is to show how eye movement studies provide a very useful on-line processing measure, allowing us to account for observed differences in behavioural data (accuracy and reaction times). The subtle processing differences that eye movement data reveal in both language and social processing have the potential to impact the everyday communication domain in autism. Full article
(This article belongs to the Special Issue Eye Movements and Visual Cognition)

19 pages, 1484 KiB  
Review
Eye Movements Actively Reinstate Spatiotemporal Mnemonic Content
by Jordana S. Wynn, Kelly Shen and Jennifer D. Ryan
Vision 2019, 3(2), 21; https://doi.org/10.3390/vision3020021 - 18 May 2019
Cited by 51 | Viewed by 9322
Abstract
Eye movements support memory encoding by binding distinct elements of the visual world into coherent representations. However, the role of eye movements in memory retrieval is less clear. We propose that eye movements play a functional role in retrieval by reinstating the encoding context. By overtly shifting attention in a manner that broadly recapitulates the spatial locations and temporal order of encoded content, eye movements facilitate access to, and reactivation of, associated details. Such mnemonic gaze reinstatement may be obligatorily recruited when task demands exceed cognitive resources, as is often observed in older adults. We review research linking gaze reinstatement to retrieval, describe the neural integration between the oculomotor and memory systems, and discuss implications for models of oculomotor control, memory, and aging. Full article
(This article belongs to the Special Issue Eye Movements and Visual Cognition)
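
One simple, order-free way to quantify the gaze reinstatement described above is the fraction of retrieval fixations landing near locations fixated at encoding. The measure below is an illustrative sketch of that idea, not the authors' metric:

```python
import numpy as np

def reinstatement_score(encoding_fix, retrieval_fix, radius=50.0):
    """Fixations as (n, 2) arrays of x, y in pixels; returns overlap in [0, 1]."""
    d = np.linalg.norm(retrieval_fix[:, None, :] - encoding_fix[None, :, :],
                       axis=-1)                     # pairwise distances
    return np.mean(d.min(axis=1) < radius)          # fraction reinstated
```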

17 pages, 3026 KiB  
Article
The Limitations of Reward Effects on Saccade Latencies: An Exploration of Task-Specificity and Strength
by Stephen Dunne, Amanda Ellison and Daniel T. Smith
Vision 2019, 3(2), 20; https://doi.org/10.3390/vision3020020 - 11 May 2019
Cited by 2 | Viewed by 3475
Abstract
Saccadic eye movements are simple, visually guided actions. Operant conditioning of specific saccade directions can reduce the latency of eye movements in the conditioned direction. However, it is not clear to what extent this learning transfers from the conditioned task to novel tasks. The purpose of this study was to investigate whether the effects of operant conditioning of prosaccades to specific spatial locations would transfer to more complex oculomotor behaviours, specifically, prosaccades made in the presence of a distractor (Experiment 1) and antisaccades (Experiment 2). In part 1 of each experiment, participants were rewarded for making a saccade to one hemifield. In both experiments, the reward produced a significant facilitation of saccadic latency for prosaccades directed to the rewarded hemifield. In part 2, rewards were withdrawn, and participants made prosaccades to targets accompanied by a contralateral distractor (Experiment 1) or made antisaccades (Experiment 2). There were no hemifield-specific effects of reward on either the remote distractor effect or antisaccade latencies, although reward was associated with an overall slowing of saccade latency in Experiment 1. These data indicate that operant conditioning of saccadic eye movements does not transfer to similar but untrained tasks. We conclude that rewarding specific spatial locations is unlikely to induce long-term, systemic changes to the human oculomotor system. Full article
(This article belongs to the Special Issue Visual Control of Action)
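
The design boils down to a latency contrast, rewarded versus unrewarded hemifield, computed separately in the rewarded phase and the transfer phase. A minimal sketch of that comparison under assumed file, column, and coding names:

```python
import pandas as pd
from scipy.stats import ttest_rel

saccades = pd.read_csv("saccade_latencies.csv")    # hypothetical file
means = (saccades.groupby(["participant", "phase", "hemifield"])["latency_ms"]
                 .mean().unstack("hemifield"))

for phase, df in means.groupby(level="phase"):     # e.g. "training", "transfer"
    t, p = ttest_rel(df["rewarded"], df["unrewarded"])
    diff = (df["rewarded"] - df["unrewarded"]).mean()
    print(f"{phase}: rewarded - unrewarded = {diff:.1f} ms, p = {p:.3f}")
```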

10 pages, 2836 KiB  
Review
Meaning and Attentional Guidance in Scenes: A Review of the Meaning Map Approach
by John M. Henderson, Taylor R. Hayes, Candace E. Peacock and Gwendolyn Rehrig
Vision 2019, 3(2), 19; https://doi.org/10.3390/vision3020019 - 10 May 2019
Cited by 36 | Viewed by 5658
Abstract
Perception of a complex visual scene requires that important regions be prioritized and attentionally selected for processing. What is the basis for this selection? Although much research has focused on image salience as an important factor guiding attention, relatively little work has focused on semantic salience. To address this imbalance, we have recently developed a new method for measuring, representing, and evaluating the role of meaning in scenes. In this method, the spatial distribution of semantic features in a scene is represented as a meaning map. Meaning maps are generated from crowd-sourced responses given by naïve subjects who rate the meaningfulness of a large number of scene patches drawn from each scene. Meaning maps are coded in the same format as traditional image saliency maps, and therefore both types of maps can be directly evaluated against each other and against maps of the spatial distribution of attention derived from viewers’ eye fixations. In this review we describe our work focusing on comparing the influences of meaning and image salience on attentional guidance in real-world scenes across a variety of viewing tasks that we have investigated, including memorization, aesthetic judgment, scene description, and saliency search and judgment. Overall, we have found that both meaning and salience predict the spatial distribution of attention in a scene, but that when the correlation between meaning and salience is statistically controlled, only meaning uniquely accounts for variance in attention. Full article
(This article belongs to the Special Issue Eye Movements and Visual Cognition)
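
The key statistical move in this review, asking whether meaning predicts attention once salience is statistically controlled, is a semipartial correlation computed over map pixels. A minimal sketch, assuming the attention, meaning, and saliency maps are same-shape 2-D arrays (this is not the meaning map toolchain itself):

```python
import numpy as np

def semipartial_r(attention, meaning, salience):
    """Correlation of attention with meaning after removing salience from meaning."""
    a, m, s = (x.ravel() for x in (attention, meaning, salience))
    beta = np.polyfit(s, m, 1)           # residualize meaning on salience
    m_resid = m - np.polyval(beta, s)
    return np.corrcoef(a, m_resid)[0, 1]

# r_zero = np.corrcoef(attention.ravel(), meaning.ravel())[0, 1]  # zero-order
# r_semi = semipartial_r(attention, meaning, salience)            # salience removed
```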

5 pages, 3661 KiB  
Commentary
Bela Julesz in Depth
by Thomas V. Papathomas, Kazunori Morikawa and Nicholas Wade
Vision 2019, 3(2), 18; https://doi.org/10.3390/vision3020018 - 08 May 2019
Cited by 4 | Viewed by 4192
Abstract
A brief tribute to Bela Julesz (1928–2003) is made in words and images. In addition to a conventional stereophotographic portrait, his major contributions to vision research are commemorated by two ‘perceptual portraits’, which try to capture the spirit of his main accomplishments in stereopsis and the perception of texture. Full article
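
Julesz's signature contribution to stereopsis, the random-dot stereogram, is easy to sketch: identical random textures in the two eyes except for a central square that is shifted horizontally, and therefore visible in depth only when the two images are binocularly fused. A minimal illustrative version:

```python
import numpy as np

rng = np.random.default_rng(0)
size, square, shift = 200, 80, 4
left = rng.integers(0, 2, (size, size)).astype(float)
right = left.copy()

top = (size - square) // 2
# Shift the central square horizontally in the right eye's image...
right[top:top + square, top:top + square - shift] = \
    left[top:top + square, top + shift:top + square]
# ...and fill the uncovered strip with fresh random dots.
right[top:top + square, top + square - shift:top + square] = \
    rng.integers(0, 2, (square, shift))
# Fusing `left` and `right` side by side reveals a square floating in depth.
```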

17 pages, 866 KiB  
Review
Associations and Dissociations between Oculomotor Readiness and Covert Attention
by Soazig Casteau and Daniel T. Smith
Vision 2019, 3(2), 17; https://doi.org/10.3390/vision3020017 - 07 May 2019
Cited by 17 | Viewed by 5110
Abstract
The idea that covert mental processes such as spatial attention are fundamentally dependent on systems that control overt movements of the eyes has had a profound influence on theoretical models of spatial attention. However, theories such as Klein’s Oculomotor Readiness Hypothesis (OMRH) and Rizzolatti’s Premotor Theory have not gone unchallenged. We previously argued that although OMRH/Premotor theory is inadequate to explain pre-saccadic attention and endogenous covert orienting, it may still be tenable as a theory of exogenous covert orienting. In this article we briefly reiterate the key lines of argument for and against OMRH/Premotor theory, then evaluate the Oculomotor Readiness account of Exogenous Orienting (OREO) with respect to more recent empirical data. These studies broadly confirm the importance of oculomotor preparation for covert, exogenous attention. We explain this relationship in terms of reciprocal links between parietal ‘priority maps’ and the midbrain oculomotor centres that translate priority-related activation into potential saccade endpoints. We conclude that the OMRH/Premotor theory hypothesis is false for covert, endogenous orienting but remains tenable as an explanation for covert, exogenous orienting. Full article
(This article belongs to the Special Issue Eye Movements and Visual Cognition)

13 pages, 1279 KiB  
Article
Attention Combines Similarly in Covert and Overt Conditions
by Christopher D. Blair and Jelena Ristic
Vision 2019, 3(2), 16; https://doi.org/10.3390/vision3020016 - 25 Apr 2019
Cited by 8 | Viewed by 4161
Abstract
Attention is classically classified according to mode of engagement into voluntary and reflexive, and type of operation into covert and overt. The first distinguishes whether attention is elicited intentionally or by unexpected events; the second, whether attention is directed with or without eye movements. Recently, this taxonomy has been expanded to include automated orienting engaged by overlearned symbols and combined attention engaged by a combination of several modes of function. However, so far, combined effects were demonstrated in covert conditions only, and, thus, here we examined if attentional modes combined in overt responses as well. To do so, we elicited automated, voluntary, and combined orienting in covert, i.e., when participants responded manually and maintained central fixation, and overt cases, i.e., when they responded by looking. The data indicated typical effects for automated and voluntary conditions in both covert and overt data, with the magnitudes of the combined effect larger than the magnitude of each mode alone as well as their additive sum. No differences in the combined effects emerged across covert and overt conditions. As such, these results show that attentional systems combine similarly in covert and overt responses and highlight attention’s dynamic flexibility in facilitating human behavior. Full article
(This article belongs to the Special Issue Visual Orienting and Conscious Perception)
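
The comparison at the heart of this design is between cueing-effect magnitudes (uncued minus cued RT) for each mode and for the combined condition, including the additive sum of the single modes. A minimal sketch with illustrative column names and coding (cued = 1, uncued = 0):

```python
import pandas as pd

trials = pd.read_csv("orienting_trials.csv")     # hypothetical file
rt = trials.groupby(["mode", "cued"])["rt_ms"].mean().unstack("cued")
effect = rt[0] - rt[1]                           # cueing effect per mode

print(effect)                                    # automated, voluntary, combined
print("additive sum:", effect["automated"] + effect["voluntary"])
print("combined exceeds sum:",
      effect["combined"] > effect["automated"] + effect["voluntary"])
```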

15 pages, 2641 KiB  
Article
Hands Ahead in Mind and Motion: Active Inference in Peripersonal Hand Space
by Johannes Lohmann, Anna Belardinelli and Martin V. Butz
Vision 2019, 3(2), 15; https://doi.org/10.3390/vision3020015 - 18 Apr 2019
Cited by 11 | Viewed by 3673
Abstract
According to theories of anticipatory behavior control, actions are initiated by predicting their sensory outcomes. From the perspective of event-predictive cognition and active inference, predictive processes activate currently desired events and event boundaries, as well as the expected sensorimotor mappings necessary to realize them, dependent on the involved predicted uncertainties before actual motor control unfolds. Accordingly, we asked whether peripersonal hand space is remapped in an uncertainty anticipating manner while grasping and placing bottles in a virtual reality (VR) setup. To investigate, we combined the crossmodal congruency paradigm with virtual object interactions in two experiments. As expected, an anticipatory crossmodal congruency effect (aCCE) at the future finger position on the bottle was detected. Moreover, a manipulation of the visuo-motor mapping of the participants’ virtual hand while approaching the bottle selectively reduced the aCCE at movement onset. Our results support theories of event-predictive, anticipatory behavior control and active inference, showing that expected uncertainties in movement control indeed influence anticipatory stimulus processing. Full article
(This article belongs to the Special Issue Visual Control of Action)

15 pages, 1369 KiB  
Article
Dynamic Cancellation of Perceived Rotation from the Venetian Blind Effect
by Joshua J. Dobias and Wm Wren Stine
Vision 2019, 3(2), 14; https://doi.org/10.3390/vision3020014 - 03 Apr 2019
Viewed by 2649
Abstract
Geometric differences between the images seen by each eye enable the perception of depth. Depth is also produced in the absence of geometric disparities by binocular disparities in either average luminance or contrast, which is known as the Venetian blind effect. The temporal dynamics of the Venetian blind effect are much slower (1.3 Hz) than those for geometric binocular disparities (4–5 Hz). Sine-wave modulations of luminance and contrast disparity, however, can be discriminated from square-wave modulations at 1 Hz, which suggests a non-linearity. To measure this non-linearity, a luminance or contrast disparity modulation was presented at a particular frequency and paired with a geometric disparity modulation that cancelled the perceived rotation induced by the luminance or contrast modulation. Phase offsets between the luminance or contrast modulation and the geometric modulation varied in 50 ms increments from −200 to 200 ms. When the phases were aligned, observers perceived little or no rotation. When they were not aligned, a perceived rotation was induced by the contrast or luminance disparity and then cancelled by the geometric disparity, causing the perception of a slight jump. The Generalized Difference Model, which is linear in time, predicted a minimal probability when the luminance or contrast disparities occurred before the geometric disparities, owing to the slower dynamics of the Venetian blind effect. The Gated Generalized Difference Model, which is non-linear in time, predicted a minimal probability for offsets of 0 ms. Results followed the Gated model, which further suggests a non-linearity in time for the Venetian blind effect. Full article
(This article belongs to the Special Issue Visual Motion Processing)

16 pages, 3005 KiB  
Article
The A-Effect and Global Motion
by Pearl S. Guterman and Robert S. Allison
Vision 2019, 3(2), 13; https://doi.org/10.3390/vision3020013 - 28 Mar 2019
Viewed by 2839
Abstract
When the head is tilted, an objectively vertical line viewed in isolation is typically perceived as tilted. We explored whether this shift also occurs when viewing global motion displays perceived as either object-motion or self-motion. Observers stood and lay left side down while viewing (1) a static line, (2) a random-dot display of 2-D (planar) motion or (3) a random-dot display of 3-D (volumetric) global motion. On each trial, the line orientation or motion direction was tilted from the gravitational vertical and observers indicated whether the tilt was clockwise or counter-clockwise from the perceived vertical. Psychometric functions were fit to the data and shifts in the point of subjective verticality (PSV) were measured. When the whole body was tilted, the perceived tilt of both a static line and the direction of optic flow were biased in the direction of the body tilt, demonstrating the so-called A-effect. However, we found significantly larger shifts for the static line than for volumetric global motion, as well as larger shifts for volumetric displays than for planar displays. The A-effect was larger when the motion was experienced as self-motion than when it was experienced as object-motion, and discrimination was also more precise in the self-motion than in the object-motion conditions. The different magnitudes of the A-effect for the line and motion conditions, and for object- and self-motion, may be due to differences in the combination of idiotropic (body) and vestibular signals, particularly in the case of vection, which occurs despite visual-vestibular conflict. Full article
(This article belongs to the Special Issue Multisensory Modulation of Vision)
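
The PSV analysis described above amounts to fitting a psychometric function to clockwise/counter-clockwise judgments and reading off its 50% point; the shift of that point under body tilt is the A-effect. A minimal sketch with made-up data:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(tilt, psv, sigma):
    """Probability of a 'clockwise' response as a cumulative Gaussian."""
    return norm.cdf(tilt, loc=psv, scale=sigma)

# Hypothetical data: tilts (deg) and proportion of clockwise responses.
tilts = np.array([-12, -8, -4, 0, 4, 8, 12], dtype=float)
p_cw = np.array([0.03, 0.10, 0.24, 0.46, 0.71, 0.90, 0.98])

(psv, sigma), _ = curve_fit(psychometric, tilts, p_cw, p0=[0.0, 4.0])
print(f"PSV = {psv:.2f} deg (a shift from 0 indicates an A-effect), "
      f"sigma = {sigma:.2f} deg")
```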

9 pages, 1372 KiB  
Review
A Review of Motion and Orientation Processing in Migraine
by Alex J. Shepherd
Vision 2019, 3(2), 12; https://doi.org/10.3390/vision3020012 - 27 Mar 2019
Cited by 4 | Viewed by 3079
Abstract
Visual tests can be used as noninvasive tools to test models of the pathophysiology underlying neurological conditions, such as migraine. They may also be used to track changes in performance that vary with the migraine cycle or can track the efficacy of prophylactic treatments. This article reviews the literature on performance differences on two visual tasks, global motion discrimination and orientation, which, of the many visual tasks that have been used to compare differences between migraine and control groups, have yielded the most consistent patterns of group differences. The implications for understanding the underlying pathophysiology in migraine are discussed, but the main focus is on bringing together disparate areas of research and suggesting those that can reveal practical uses of visual tests to treat and manage migraine. Full article
(This article belongs to the Special Issue Visual Motion Processing)
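
The global motion task referred to here is usually run with random-dot kinematograms, in which a coherence parameter sets the fraction of dots moving in a common direction; thresholds are the minimum coherence supporting reliable direction discrimination. A minimal sketch of one frame update, with illustrative parameters:

```python
import numpy as np

def rdk_step(positions, coherence, direction, speed, rng):
    """Advance (n, 2) dot positions by one frame; direction in radians."""
    n = len(positions)
    signal = rng.random(n) < coherence            # coherent subset this frame
    angles = np.where(signal, direction, rng.uniform(0, 2 * np.pi, n))
    positions[:, 0] += speed * np.cos(angles)
    positions[:, 1] += speed * np.sin(angles)
    return positions

rng = np.random.default_rng(1)
dots = rng.uniform(0, 1, (100, 2))                # 100 dots in a unit square
dots = rdk_step(dots, coherence=0.3, direction=0.0, speed=0.01, rng=rng)
```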
