Review

Two Decades of Touchable and Walkable Virtual Reality for Blind and Visually Impaired People: A High-Level Taxonomy

by Julian Kreimeier * and Timo Götzelmann *
Nuremberg Institute of Technology, D-90489 Nuremberg, Germany
* Authors to whom correspondence should be addressed.
Multimodal Technol. Interact. 2020, 4(4), 79; https://doi.org/10.3390/mti4040079
Submission received: 30 August 2020 / Revised: 9 November 2020 / Accepted: 10 November 2020 / Published: 17 November 2020
(This article belongs to the Special Issue 3D Human–Computer Interaction)

Abstract

Although most readers associate the term virtual reality (VR) with visually appealing entertainment content, this technology also promises to be helpful to disadvantaged people such as blind or visually impaired people. Virtual objects and environments that can be spatially explored offer a particular benefit here, as they overcome the limitations of physical objects and spaces. To give readers a complete, clear and concise overview of current and past publications on touchable and walkable, audio-supplemented VR applications for blind and visually impaired users, this survey paper presents a high-level taxonomy to cluster the work done up to now from the perspective of technology, interaction and application. In this respect, we introduce a classification into small-, medium- and large-scale virtual environments to cluster and characterize related work. Our comprehensive table shows that especially grounded force feedback devices for haptic feedback (‘small scale’) were strongly researched in different application scenarios, mainly from an exocentric perspective, but there are also increasingly physically (‘medium scale’) or avatar-walkable (‘large scale’) egocentric audio-haptic virtual environments. In this respect, novel and widespread interfaces such as smartphones or nowadays consumer-grade VR components represent a promising potential for further improvements. Our survey paper provides a database of related work to foster the creation of new ideas and approaches regarding both technical and methodological aspects.


1. Introduction

Self-determined access to spatial information is much more difficult for blind and visually impaired people than for sighted people. In order to touch and understand unknown spatial objects or to get to know unknown buildings for route planning in advance, physical models like maps or miniature models must be made available or be specially produced. If, for example, the physical environment cannot be explored for the first time together with a sighted assistant, 2.5D tactile maps are oftentimes used, whose benefits can be enhanced by increased interaction possibilities [1,2], e.g., contextual speech output when touched. However, such physical models have limitations in terms of production cost, time and interaction possibilities, e.g., the limited resolution of a tactile map or the permanently fixed scale of a 3D printed object. In addition, real environments and sighted assistants are not always freely available, which limits the independence of blind and visually impaired people.
In order to overcome these limitations, among other things, research has been conducted on virtual reality (VR) in this context, and significant progress has been achieved through further technical development. By means of VR, it is possible not only to see virtual objects and environments, but also to hear them (in terms of spatial and semantic information) and to actually grasp and feel them through haptic feedback devices. Virtual content can basically be defined, distributed, scaled and provided with interaction almost at will and in real time. This can provide much easier and more useful access to spatial information for blind and visually impaired people. At the same time, it is difficult to keep track of the large number of publications, which cover different thematic and technical aspects when it comes to the application, implementation and evaluation of walkable and touchable VR. Audio feedback certainly plays an important role in this context, but within the focus of this work, it will be considered as a supplementary modality providing spatial and semantic information for the exploration process.
Thus, the aim of this work was to give readers a complete, clear and concise overview of publications on the topic of touchable and walkable VR applications for blind and visually impaired users. We especially focus on classifying the particular implementation (i.e., type of interaction) and evaluation in different application scenarios to break the state of the art down into a table and to foster the creation and dissemination of new ideas regarding both technical and methodological aspects.
In the following, we will briefly recapitulate both ‘VR’ in the context of this paper and how it can be useful to (blind) users. Afterwards, we will introduce and apply a high-level taxonomy. The results will then be presented in a concise table and thoroughly discussed, followed by a conclusion and an outlook on future work.

1.1. Definition: Blind or Visually Impaired

According to the WHO (World Health Organization) World Report on Vision [3], globally there are at least 2.2 billion people who are blind or have a vision impairment. There are several eye diseases causing different types of vision impairments. Additionally, the legal definitions of blindness differ greatly between countries. Thus, the WHO classifies the severity of vision impairment based on the residual vision in the better eye when it is best corrected, e.g., by spectacles. Depending on the abilities of the user, there are individual requirements and possibilities regarding the interface; e.g., there is a toolset to prepare visual VR content for different degrees of visual impairment [4]. It is undoubtedly important that the technical interface be adapted to the user in the best possible way, e.g., to be able to make use of existing residual vision. In summary, there are different types and degrees of visual impairment, ranging from a blurry but complete image of the world to a sharp image with a reduced field of view, or even complete blindness [4]. However, in this survey paper, we focus on the technical possibilities of the VR interaction in particular and thus cannot explicitly distinguish between blindness and residual vision. One could also think of users who are in a non-visual virtual environment not because of a physical impairment, but because of an overload or unavailability of the visual modality. The latter scenario does represent a similar situation (as discussed in [5]), but the task-specific requirements and abilities of (physically) visually impaired users are oftentimes different from those of sighted users [6]. In order to be able to build upon previous work [7,8,9] and to follow a consistent and precise focus in the following, we define the term visually impaired as users with impaired vision due to a physical impairment.

1.2. Consideration Regarding the Term ‘Virtual Reality’ within the Scope of this Paper

The visual, audio and haptic perception of real objects and environments is familiar to almost everyone from everyday life. However, special software and hardware can provide stimuli to these modalities to give the user the impression of actually being in a different environment or of seeing, touching or hearing an object that does not physically exist. The most well-known definition, by Milgram and Kishino, describes VR as an environment “in which the participant-observer is totally immersed in, and able to interact with, a completely synthetic world.” [10]. The basic interaction between human and machine is characterized by two interlocking control loops: one on the human’s and one on the computer’s side [11]. Both are equipped with (mechanical or biological) sensors and actuators, so that the computer simulation can give the user the visual, audio or haptic impression of seeing, hearing or touching virtual content. This principle can be adopted (with other sensors and actuators, respectively) for all sensory modalities. The impairment or even absence of vision drastically reduces the overall sensory bandwidth [12], and the combination of haptic and audio feedback becomes especially important to blind and visually impaired users. According to the distinction of the dimensionality of tactile information by Reichinger et al. [13], VR can convey 2D, 2.5D and 3D spatial information. This depends, of course, on the particular application and the technical implementation of the haptic interface with no physical representation of the spatial information. Depending on the modality, there are also different perspectives: audio feedback or spatial hearing is by nature egocentric and allows the simultaneous perception of direction and distance to multiple audio sources. Haptic feedback, on the other hand, consists of spatial information gathered at one or more individual collision points using tactile and kinesthetic cues. It can be generated by force feedback or vibration, and thus is neither ego- nor allocentric, but hand- or body-centered. Depending on the conceptual design of the application and the technical implementation, specific characteristics are decisive. For example, haptic feedback can be implemented using grounded or wearable force feedback (see Figure 1). The former are expensive and rare devices that prevent the user from moving a hand inside a virtual object, which feels more intuitive. The latter are more lightweight and less expensive devices (e.g., data gloves), which limit only the fingers’ motions to simulate grasping an object, but do not prevent the users from accidentally moving their wrist as a whole inside the virtual object. In terms of haptic feedback, the survey papers by Pacchierotti et al. [14] and Seifi et al. [15] provide a comprehensive overview of haptic rendering devices. In terms of audio rendering, the works of Picinali and Katz [16,17] are exemplary pointers to many other publications beyond the scope of this paper.
In this context, unimodal concepts such as sonification (e.g., [18]) or haptification (e.g., [19]), providing only audio or haptic feedback, must of course also be mentioned, but they represent independent and separate fields of research. With regard to the above-outlined focus of this paper, however, we will concentrate in the following on the interactive spatial exploration aspects of touchable and walkable VR applications. Audio feedback certainly plays an important role in this context, but in the context of this work, it will be considered as a supplementary modality providing spatial and semantic information for the exploration process.

1.3. Integration and Differentiation from Existing Surveys

Research in VR for blind people is certainly not a new phenomenon, although technology and methodology are naturally evolving. There are already some works that have summarized and analyzed this development at their time [6,7,8,20,21,22,23,24,25,26]. The oldest survey, by Vincent Lévesque in 2005 [21], gives an overview of the state of knowledge at that time on how blind people can be supported by means of haptics. This work presents a very good overview of the special needs and possible applications of how touchable and walkable virtual environments can support blind and visually impaired people, but it is no longer up to date, and more recent key publications are missing. In 2007, Cobb and Sharkey [9] presented, among other things, a review of the previous decade of research and development of VR applications for blind and visually impaired people. This review gives an interesting and comprehensive look at the state of the art at that time, but is not specifically focused on blind and visually impaired people and is also outdated nowadays. In their study of 2008, White et al. interviewed experts with practical experience to find out how multimodal approaches to navigation can offer the greatest possible added value, and provide important design guidelines and assistance for developers of such systems [6]. Continuing on from there, Ghali et al. gathered in 2012 a number of methods and approaches for how VR can help (deaf and) blind people in several application scenarios like mobility, learning or games [8]. Later, in 2014, Orly Lahav looked back on the previous 14 years of “VEs that were developed to enable people who are blind to improve their O&M skills” [7] (VE means virtual environment). In this extensive work, a differentiation between “systems that support the acquisition of a cognitive map” and “systems that are used as O&M rehabilitation aids” is made, and the existing works are analyzed in terms of a descriptive information dimension, a system dimension and a research dimension. According to this, most publications deal with complex prototypes whose handling is not trivial, and Orientation and Mobility (O+M) experts were only rarely involved in development and evaluation. Ideally, a VR system would adapt to the users and could also be used in situ and handheld. Similarly, the survey paper of Darin et al. [20] from 2015 proposes and discusses a classification of related work based on “Interface, Interaction, Cognition, and Evaluation”; they analyzed 21 VEs. While this overview is very vivid, it is no longer complete and does not consider multimodal haptic environments in general, which is why some key publications are missing in this context. This is certainly also due to the fact that the last two publications cited were published several years ago. Yasmin published in 2018 an extensive survey on “Virtual Reality and Assistive Technologies” [24], which is certainly closer to today. One chapter summarizes “Haptic VEs for visual impairments”, while the whole work deals with supporting several impairments through the use of VR and does not cover all approaches that are relevant in the context of the present survey paper. Especially on cognitive learning and methodological aspects, there are overview works by Mesquita et al. [22,23] from 2018 and 2019 which, however, pay less attention to technical aspects of the user interface and its implementation. The latest review, by Façanha et al. in 2020 [26], focuses on virtual environments for orientation and mobility training purposes and provides an in-depth analysis regarding technical development as well as usability and cognitive evaluation aspects. However, due to their specific focus and emphasis, key publications regarding VR for blind and visually impaired users from 2019 and 2020 and outside this particular scope are missing. In addition, unlike the work presented here, the authors did not introduce a novel taxonomy and did not consider a wider scope beyond orientation and mobility training.

2. Introduction of Proposed Taxonomy

2.1. Scientific Scope and Literature Search Methodology

The aim of this work was to give readers a complete, clear and concise overview of publications on the topic of conveying (generic) spatial information by means of touchable and walkable VR for blind and visually impaired users. We especially focus on classifying the particular implementation of the spatial exploration interface and its evaluation in different application scenarios. This thematic environment is in itself a very broad field that can be analyzed in almost any depth, but with this paper, we wish to give readers the opportunity to get a lucid and holistic overview of existing technical implementation, evaluation-related and application-oriented aspects. In the following, we will specify the content and interfaces published so far and classify related work by means of a literature review and a taxonomy. The results provide precise pointers to further work in each synoptic cluster. For this purpose, a systematic literature search was conducted using the publisher-independent and thorough search engine scholar.google.com as a starting point. Here, we identified relevant literature using the initial keywords ‘virtual reality’, ‘virtual environment’, ‘blind’ and ‘visually impaired’ and iteratively completed (and verified) our database by a backward snowball search [27] from the latest work to the earliest findable publication; see the complete workflow in Figure 2. Through the snowball search process, we also became aware of other lines of relevant work, which were most often unimodal interaction approaches like sonification or haptification. If appropriate in the context of this paper (i.e., the user can explore the virtual content interactively using haptic feedback or locomotion), they were also considered in the further process.
We also included other keyword search hits between the earliest and latest hits to expand and verify the collection. We included any scientific work that presents a technical implementation towards a touchable and walkable VR for blind and visually impaired people and optionally also includes an evaluation. We excluded non-scientific work, work not written in English, and work outside the mentioned scope. To sharpen our contribution in this scope, we also needed to exclude in-situ real-time navigation aids using augmented reality, as these are better addressed in a separate article specializing in this topic (e.g., see [28]). However, we did use existing review papers’ references to re-check and expand our growing dataset.
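To make the workflow described above more concrete, the following minimal sketch illustrates the iterative backward snowball search in Python. It assumes a hypothetical get_references() helper returning the reference list of a publication and an is_in_scope() predicate encoding the inclusion and exclusion criteria above; in practice, these steps were performed manually via Google Scholar, so this is an illustration of the procedure rather than the tooling actually used.

```python
# Minimal sketch of a backward snowball search (cf. Wohlin [27]).
# `seed_papers`, `is_in_scope` and `get_references` are illustrative assumptions.
def snowball_search(seed_papers, is_in_scope, get_references):
    collected = set(seed_papers)            # start set from the keyword search
    frontier = list(seed_papers)
    while frontier:                         # repeat until no new in-scope work appears
        paper = frontier.pop()
        for ref in get_references(paper):   # walk backwards through the cited work
            if ref not in collected and is_in_scope(ref):
                collected.add(ref)
                frontier.append(ref)
    return collected
```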
Herewith, we compiled a comprehensive overview and derived a novel taxonomy, which can classify the relevant prior work into meaningful and lucid groups to ease future works’ search for related work. In the following, we explain these aspects in detail, discuss how this taxonomy is useful and how it can be used for future work.

2.2. Definition of the Feature ‘Scale’

One essential characteristic of a three-dimensional world is that one can move one’s body globally and the attached sensory system locally within it. This also applies to 1D or 2D information such as maps, planar surfaces or generic spatial information like shapes, graphs and geometries that are to be explored. This characteristic must be mapped in a virtual environment, for example to mentally integrate the shape of a virtual object by pointwise manual haptic exploration or to be able to explore a larger environment by means of locomotion. At this point, of course, separate research fields such as the non-visual exploration of information by sonification and haptification are also important. However, in the scope of this paper, unimodal audio or haptic feedback is understood as a supplementary modality when spatially exploring virtual content and is considered rather secondary. Inspired by the hapto–acoustic interaction metaphors of De Felice et al. [5], we consider the particular technical implementation of this interaction possibility to be an essential feature by which different levels of ‘scale’ can be defined, as we will explain in more detail in the following. An overview is listed in Table 1 and a schematic illustration can be seen in Figure 3.
Scale is thus to be interpreted as the size of the virtual content and not as the user’s input space. Depending on this level of size (in the following referred to as scale), different interaction techniques are applied to implement the spatial exploration process (see Figure 3 and Table 1). Smaller (or miniaturized) content can be palpated within arm’s reach, while larger environments can be explored on foot within the physical limits of real spaces. Even larger environments must be explored by relative control of an avatar via keyboard, gamepad or walk-in-place approaches. In each scale, there are of course various ways to implement the audio feedback, but as this is a field of its own, it cannot be discussed in depth in the context of this paper for the sake of clarity. This field is explicitly addressed in dedicated papers like [17] and subsequent work.

2.2.1. Small Scale: Touching Virtual Objects within Arm’s Reach or Absolute Positioning of the Avatar

We propose to define the small scale as the interaction of the user with an interface within hand reach, i.e., the user does not have to move their body in physical space. Palpating or interacting with a virtual object is achieved by absolutely positioning the haptic feedback point(s) of interaction of the user interface within hand reach, while the user stays in the same spot. A very common approach is using a grounded force feedback device from the Phantom series, which provides force feedback to the user in a very limited workspace. This haptic feedback can certainly be supplemented by suitable semantic and/or spatial audio feedback, for example when touching a certain part or area of the virtual object that needs to be explored. In this context, the presence of force feedback is an advantage: it prevents the user from inadvertently reaching through the object and thus makes palpation more effective. As a disadvantage, it also entails a very limited workspace, and therefore virtual objects have to be scaled to fit into this area. Transferring this spatial information to reality (e.g., a miniature map to a real space) requires a certain amount of imagination and mental capacity, while such devices are also mostly technically complex, expensive and thus rare.
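To make this interaction principle more tangible, the following minimal sketch illustrates the classic penalty-based (spring) force rendering that grounded devices like the Phantom commonly build upon: once the tracked interaction point penetrates a virtual surface, a restoring force proportional to the penetration depth is commanded to the device. This is a generic, simplified model with assumed parameters, not the implementation of any of the cited systems.

```python
import numpy as np

def penalty_force(tool_pos, surface_point, surface_normal, stiffness=800.0):
    """Simplified spring ('penalty') force for a grounded force feedback device.

    tool_pos:       3D position of the device's interaction point (m)
    surface_point:  closest point on the virtual surface (m)
    surface_normal: unit normal of the surface at that point
    stiffness:      virtual spring constant (N/m), device dependent
    """
    penetration = np.dot(surface_point - tool_pos, surface_normal)
    if penetration <= 0.0:
        return np.zeros(3)                  # interaction point is outside the object
    # push the interaction point back out along the surface normal
    return stiffness * penetration * surface_normal
```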

2.2.2. Medium Scale: Physically Walking through VE, Restricted to Physically Available Space

The medium-scale level describes the interaction of users with larger VEs that can be explored by physical walking. The position of the user is captured by means of appropriate tracking technology, and the user is given the appropriate sensory feedback as if he or she were actually walking in this environment or around an object. Due to this freedom of physical movement, at least wearable haptic feedback must be used; grounded implementations are not possible without major, usually disproportionately large, technical effort. The freedom of movement is limited to the area of the tracking environment, which is why only sections of a larger environment can be displayed in real size. A common application is the audio-haptic exploration of a room or a section of a virtual outdoor space with a non-grounded haptic feedback white cane simulation. This allows real spatial structures to be simulated much more immersively and comprehensively than miniature models, but such approaches mostly exclude grounded haptic feedback and predetermine a limited physical area that can be used virtually.

2.2.3. Large Scale: Relative Positioning of the Avatar (‘walking’) by Controller Input, e.g., a Joystick

At large scale, users can discover VEs or generic objects that are much larger than the physically available space by using an avatar. They perceive the VE as if they were the avatar in it and can control its motion by appropriate user input. The latter can be implemented either by digital or analogue movement with a game controller, the keyboard or a joystick providing passive haptic information. In addition, walk-in-place approaches, which mimic actual walking, have been used. Thus, a theoretically arbitrarily large VE can be explored, but the users must be able to put themselves in their avatar’s place as well as possible through a realistic audio-haptic simulation. A common example is the exploration of an unknown, large real space using an avatar that is translated and rotated stepwise in a grid-like manner inside a purely auditory VE.

2.3. Definition of the Feature ‘Exploration Interaction’

There are several possibilities for the technical implementation of interactive spatial exploration. In the following, we will show what these are and how they differ from each other. The semantic and conceptual borders between mere sensory input and output or devices may be knowingly blurred in order to better classify and distinguish the overall interaction concept.

2.3.1. Haptic Feedback

According to the definition from Pacchierotti et al. [14], we distinguish grounded and wearable haptic feedback. A good example of grounded haptic feedback is the extensively researched series of Phantom devices. Here, a mechanical arm generates force feedback when the user’s finger collides with a virtual object and thus prevents the user from reaching into the virtual object. Such devices are usually very complex, still expensive today and rarely found outside laboratories [15]. Wearable devices are considerably handier and also more cost-effective, but, in contrast to grounded force feedback, do not prevent the user from reaching into a virtual object [36]. Thus, one has to palpate and mentally integrate the surface by monitoring the triggered haptic feedback in the absence of visual feedback. Most often, wearable haptic feedback is implemented as so-called vibrotactile signals. Depending on the haptic rendering algorithm, the onset of vibration indicates the collision with a virtual object, for example [37,38]. Combined with force feedback, vibrations can also convey different virtual textures [39,40], for example when using a virtual white cane.
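As a simple counterpart to the grounded force rendering sketched above, wearable vibrotactile feedback is often reduced to switching a vibration motor on when the tracked fingertip (or virtual cane tip) collides with virtual geometry, optionally modulating amplitude or frequency to hint at different textures. The following sketch is a generic illustration of this idea with assumed parameter ranges, not the rendering algorithm of any cited work.

```python
def vibrotactile_cue(penetration_depth, texture_roughness, max_amplitude=1.0):
    """Map a collision with virtual geometry to a vibration command.

    penetration_depth: how far the contact point is inside the object (m);
                       values <= 0 mean no contact
    texture_roughness: per-object parameter in [0, 1] used to vary the cue
    Returns (amplitude in [0, 1], frequency in Hz).
    """
    if penetration_depth <= 0.0:
        return 0.0, 0.0                         # no contact, motor stays off
    # deeper contact -> stronger vibration (clamped to the motor's maximum)
    amplitude = min(max_amplitude, penetration_depth * 50.0)
    # rougher virtual texture -> higher vibration frequency
    frequency = 80.0 + 170.0 * texture_roughness
    return amplitude, frequency
```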

2.3.2. Audio Feedback

Spatial hearing is an important source of sensory information for blind and visually impaired people, regarding both contextual information about the environment and spatial information in general. Therefore, almost all known VR applications for blind and visually impaired users integrate spatial audio rendering, for example to give users a realistic and useful acoustic impression of the environment or virtual content [16] or to provide haptic-supplementing semantic information. For instance, this can be spatial hearing in combination with head tracking (e.g., the user can move his or her head to locate sound sources and acoustic information [41]) or audio rendering processes that are independent of the user’s head rotation [42], using stereo speakers or even speaker arrays. Stereo speakers are often used when, for example, controlling an avatar with the keyboard through a purely auditory VE, while the users hear through the speakers as if they were the controlled avatar inside the VE [43]. Of course, the intended (and technically possible) quality of this audio rendering also varies with the purpose and possibilities of the used or available hardware; most often, the available computing power is decisive [16,32,39,44]. Thereby, (spatial) audio rendering itself is an interesting field of research in its own right. Since audio feedback is considered in this work only as a spatial and semantic supplement, we provide the reader with a few references to further, more in-depth work (e.g., [17]).
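To illustrate the head-tracking idea on a conceptual level, the sketch below computes the direction of a sound source relative to the listener’s tracked head yaw and derives simple stereo panning gains. Real systems typically use full HRTF-based binaural rendering, so this is only a strongly simplified, assumed stand-in rather than the approach of any cited work.

```python
import math

def stereo_gains(listener_pos, listener_yaw, source_pos):
    """Very simplified head-relative stereo panning (2D, no HRTF).

    listener_pos, source_pos: (x, z) positions in the horizontal plane (m)
    listener_yaw:             tracked head orientation around the vertical axis (rad)
    Returns (left_gain, right_gain, distance) for a point sound source.
    """
    dx = source_pos[0] - listener_pos[0]
    dz = source_pos[1] - listener_pos[1]
    distance = math.hypot(dx, dz)
    # angle of the source relative to the direction the head is facing
    azimuth = math.atan2(dx, dz) - listener_yaw
    pan = math.sin(azimuth)                   # -1 = fully left, +1 = fully right
    attenuation = 1.0 / max(distance, 1.0)    # simple distance roll-off
    left_gain = attenuation * (1.0 - pan) / 2.0
    right_gain = attenuation * (1.0 + pan) / 2.0
    return left_gain, right_gain, distance
```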

2.3.3. Locomotion in VE

A key aspect of the (mostly egocentric) exploration of large VEs is the movement of the user’s perspective. For example, when exploring an audio-haptic VE that approximates a real place, the users certainly want to be able to ‘walk’ around in it (like sighted users [45]). Depending on the available interface, they can move their avatar by means of absolute or relative positioning. The former is most often achieved in a small-scale setting with a grounded haptic feedback device, whose haptic feedback point of interaction represents the user’s avatar position in the VE and makes walls and obstacles perceivable. The latter is most often realized by relative movements of the user’s avatar position in the VE. Mostly, its movement can be controlled in a grid-like manner horizontally along the VE with a common computer keyboard, with continuous controller input from a joystick, or with a walk-in-place approach mimicking actual walking [37]. Collisions or environment-related information is most often additionally provided by audio feedback.
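The grid-like relative locomotion described above can be illustrated with a minimal sketch: each discrete command translates or rotates the avatar by one step, and stepping into a wall of the VE’s occupancy grid triggers an audio cue instead of a movement. The map layout, step commands and feedback callback are purely illustrative assumptions.

```python
# Minimal sketch of grid-like avatar locomotion in a purely auditory VE.
# '#' cells are walls; stepping into one triggers a collision cue instead of moving.
GRID = ["#########",
        "#.......#",
        "#..###..#",
        "#.......#",
        "#########"]

HEADINGS = [(0, -1), (1, 0), (0, 1), (-1, 0)]   # north, east, south, west

def step_avatar(x, y, heading, command, play_cue):
    """Apply one discrete command ('forward', 'turn_left', 'turn_right')."""
    if command == "turn_left":
        return x, y, (heading - 1) % 4
    if command == "turn_right":
        return x, y, (heading + 1) % 4
    dx, dy = HEADINGS[heading]
    nx, ny = x + dx, y + dy
    if GRID[ny][nx] == "#":
        play_cue("collision")                   # audio feedback instead of moving
        return x, y, heading
    play_cue("footstep")                        # confirm the successful step
    return nx, ny, heading
```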

2.4. Definition of the Feature ‘Perspective’

Another main aspect when classifying VEs is the type of perspective the user has on or in it. Following the definition of egocentric and exocentric interaction metaphors by De Felice et al. [5], some approaches enable users to perceive the VE as if they were actually inside this environment (i.e., egocentric). One might think of being a doll in a dollhouse and perceiving the environment in terms of audio-haptic feedback. To reach and explore every point in this VE, the users need to be able to interactively move their position; locomotion is needed here (see the previous section in this regard) and audio feedback provides spatial and/or semantic information. With an exocentric view, however, the users can reach and explore the whole VE or virtual object without having to move their physical position by using a pointer, similar to pointwise scanning a dollhouse or a 2D mathematical function with a pen. Here, likewise, audio-haptic feedback is used to perceive spatial and semantic information. Thus, an exocentric view is strongly, but not necessarily, associated with a small- or medium-scale VE. In this context, a common example is the manual audio-haptic exploration of virtual objects like mathematical graphs or downscaled abstract maps of real spaces with grounded force feedback devices.

2.5. Definition of the Feature ‘Application Scenarios’

To achieve a distilled overview, it is necessary to cluster and summarize the application scenarios and use cases of previous works. Therefore, we decided to name archetypical application scenarios to underline which generic spatial information was mainly used for each cluster. Generally speaking, VR applications for blind and visually impaired users can basically consist of any generic spatial information, for example 3D haptic mathematical graphs in an education context (e.g., [19,30,46,47,48,49,50,51,52]) or virtual proxies of real spaces ([7,16,41,42,43,53,54,55,56,57,58,59,60,61,62,63]). Especially in the latter context, the knowledge gain and its transfer when training orientation and mobility aspects in VEs have been extensively analyzed. A concise compilation can be found in [8,9].

2.6. Definition of the Feature ‘Evaluation and Metrics’

Whenever designing a user study, researchers face the problem of how to understand and measure what the user study’s participants learn or think. Especially when spatial information needs to be conveyed by means of an audio-haptic VE, there are several approaches to answer this question and thereby evaluate the VR application in combination with the user interface. In the context of our taxonomy, this appears to be a valid criterion to cluster existing work. The applied methods range from subjective questionnaires measuring the usability [64] or orientation and mobility skills [65], to objective measurements like the identification performance for virtual objects [66] or the transfer of a trained cognitive map to a real-space navigation task [56]. In addition, some user studies were evaluated by physically rebuilding the cognitive map of the VE using physical props like LEGO [16]. In the context of this paper, the following table lists the most widely used approaches for each cluster in the last column. Some works employ sighted, but blindfolded participants [66], while other evaluations also involve blind participants [67]. The trend to date, however, is more and more towards solely blind or visually impaired participants, in the near double-digit range, fulfilling real-world navigation tasks [56,68].

3. Application and Discussion of Taxonomy

When applying the previously presented classification criteria to the currently available and relevant related work in this field, certain clusters can be created, which we show and discuss in the following. This brief taxonomy can certainly not cover all publications in full depth, but aims to provide a vivid and high-level overview of thematically appropriate connections and clusters. The chronological context and content of the survey presented in Table 2 will be explained in more detail below. At first glance, one might notice that the number of citations per cluster is not balanced. Our aim was to reflect the current status as accurately as possible, which, by making meaningful distinctions in terms of content, causes this imbalance. In the following, we briefly discuss the historical development towards the current state of the art and report on contextual milestones and trends at that time. This short summary can only mention a few milestones, which will hopefully motivate readers to conduct focused, in-depth research along these contextual links.

3.1. Technical and Content Development over the Last Two Decades

The development of touchable and walkable VR for blind and visually impaired people started in the literature in 1997, when Max and Gonzales [69] reported on “Navigable virtual Reality Worlds for Blind Users, using Spatialized Audio”. In the further course of the late 1990s and early 2000s, extensive research was conducted with grounded force feedback devices like the Phantom series in an exocentric small-scale context, e.g., making virtual charts and diagrams [31], but also simple 3D geometric objects [66], graspable. From today’s point of view, the technology of that time was very limited in terms of quality and quantity of haptic and audio feedback, which also narrowed the technical bandwidth of haptic and spatial information. Especially interesting is the combination of a force feedback data glove and a grounded force feedback mechanical arm [70,71], or two mechanical arms providing force feedback [72], to improve spatial perception, which led to sophisticated hardware and software engineering approaches [73]. However, these were also very complex as well as expensive and therefore not widely disseminated. Subsequently, towards the end of the 2000s, many experiments were carried out with devices such as the Phantom, and parameters such as haptic and audio [74] rendering were optimized so that more use cases like gaming could be implemented [47]. Some researchers also began analyzing the transfer of virtually trained knowledge to navigation in the corresponding real places [75] or used other devices like commercial off-the-shelf products [76,77]. Beginning with [78,79], the former was intensively researched by Orly Lahav and Jaime Sánchez throughout the mid-2010s (e.g., [34,58,62,68,80,81]). During this period, audio rendering also became much more powerful [82], making it possible to walk through and understand unknown environments in the absence of haptic feedback [16,55,63].
From the mid-2010s to today, complex and expensive simulation hardware like the Phantom was exchanged for commercial off-the-shelf products like the Nintendo Wii controller [58] or smartphones [43] to receive the users’ input and provide them with audio-haptic sensory feedback. In addition, appropriate VR hardware and software equipment was available by then and could be used directly for haptic and audio feedback [83,84], or smartphones could be used for virtual exploration of VEs [32,56]. There are even approaches to simulate echolocation in VR [85]. Such approaches can be found in the large-scale category, as users can use a controllable avatar to walk through much larger environments than the physically available space would allow [37]. The research trends in the remaining category, medium scale, continue to develop away from complex and expensive laboratory experiments [33] towards relatively inexpensive, but nevertheless sophisticated audio-haptic approaches like room-scale walkable audio-haptic white cane simulations [39,40].

3.2. Analysis of the Current State of the Art

Considering Table 2, it is noticeable that in the small-scale cluster, a particularly large number of publications have an exocentric perspective. This is probably due to the fact that application scenarios in this context can be implemented particularly efficiently and thus with as little cognitive load as possible during operation. Mostly, grounded force feedback is used here, which particularly supports the associated pointwise iterative exploration of (virtual) content. To be able to capture small-scale spatial information even with vibrotactile, non-grounded data gloves, it appears reasonable to use a real plane (e.g., a table) as a reference. Otherwise, the cognitive load when palpating 3D information without grounded force feedback is very high. Most of the work with exocentric small-scale VEs is done with common two-dimensional content such as graphs and diagrams, but also with simple three-dimensional geometric objects. The evaluation with users was mostly done by measuring how detailed the gained mental model was and how the users perceived the usability. With the other application scenarios like maps, games or proxies of real spaces, similar evaluation methods were used; games were oftentimes tested in a feasibility study, while the learned mental model of a map could be rebuilt and checked with physical objects or with navigation tasks in the real environment. With egocentric small-scale VEs, a similar context of evaluation possibilities applies. Medium-scale VEs are all egocentric simulations providing audio and wearable haptic feedback or just interactive audio feedback. In addition to wearing appropriate VR glasses and headphones in a tracked environment, smartphones can also take over this function and use a real empty space for a walkable virtual environment. Some works focus on the integration of haptic feedback to simulate a virtual white cane for blind people in this virtual room, most often a proxy of a real space. Compared to small and large scale, there is relatively little work done in this area, since such laboratory equipment is not very common and the corresponding functionality of smartphones is not yet well known or used. In the large-scale cluster, a relatively large number of publications are listed; these primarily contain the exploration of a purely auditory VE by relative movement of the user’s avatar inside it. This procedure is certainly less common than the absolute positioning of an avatar at small or medium scale, but this approach can also represent VEs that are larger than the physically available space. It is also possible to simulate a virtual white cane with haptic feedback or to use a smartphone or computer to control the avatar’s movement. The virtual environment explored here can be a replica of a real environment or intentionally a game to specifically train orientation and mobility skills. This mental model or the improvement of skills can be measured subjectively and objectively through questionnaires or specific navigation tasks. In summary, Figure 4 provides an illustrative example for each scale.
However, despite all these exciting and promising developments, VR has (to the authors’ best knowledge) not yet arrived in the everyday life of blind and visually impaired people. For sighted people, there is currently great progress in the VR and AR (augmented reality) field, but visual perception has completely different requirements and possibilities. Considering the mostly purely acoustic and/or haptic perception in VR, it can be quite a challenge to identify even simple 3D geometric objects without modern haptic feedback [38,83,84]. The technical development and thus also the possibilities for implementation improvements are constantly advancing, which, together with the knowledge gained so far, represents a great potential that needs to be exploited. In particular, smartphones [32,56] and nowadays commercially available data gloves [84] hold big potential. Such virtual environments could also be used to practice the use of in-situ navigation aids in a repeatable and safe training environment [86,87,88]. A functional connection to existing in-situ navigation aids (e.g., [89,90]) would also be desirable, and one could also think about realizing so far uncommon feature pairings, e.g., a medium-scale VE with an exocentric perspective. An application example might be a virtual map or a virtual object for whose complete exploration users must move in a trackable space, e.g., a true-scale horse or a map that can be explored by actually walking in (respectively, over) it. These are certainly only a few of the many possibilities for improvement that arise in view of the continuing development of technology. Instead of individual and rare special laboratory prototypes (as has often been the case to date), a common software and hardware platform should be targeted and used in the long term, which could be useful for sighted people, blind and visually impaired people, and people with other impairments.
VR is undoubtedly a very modern and high-tech way to convey spatial information, but often it is simply more practical to use simpler and less complex approaches. The high-level taxonomy presented here cannot yet make a definite statement in this context, but is intended to support future work in this field by helping researchers to further develop their ideas as well as possible using existing knowledge.

3.3. The Taxonomy’s Value for Future Work

To conclude the development and application of this taxonomy, we provide the readers with some inspiration on how this taxonomy can contribute to future work in this field:
First, it would be very interesting to evaluate concrete application examples (e.g., orientation and mobility training) not only within but also between different scales in the context of the further developing technical possibilities. Up to now, all developments and evaluations have taken place in a prototype setting and have rarely been evaluated in a realistic application scenario. Such a comparison between different scales (e.g., how grasping a downscaled virtual map versus virtually walking in it influences the quality and quantity of the mental model and the cognitive load) could bring a noticeable information gain towards a real-world application.
Second, existing paradigms and well-researched characteristics could be rethought and optimized across scales to create novel approaches. For example, does the King Midas problem (i.e., the double assignment of the user’s haptic channel for both navigation input and information output) also apply to non-small-scale and/or non-haptic modalities in virtual environments? Investigations in this area could help to make such VR systems more efficient and easier to use, which is especially important for blind and visually impaired people with a very limited (if any) visual sense.
Third, novel combinations of taxonomy features like the user’s perspective, the application scenario and the evaluation metric could help to promote novel approaches. For example, in a medium- or large-scale setting, in what type of application and implementation would exocentric information be useful? Would it be useful if the users could interactively change their perspective, and how could this change best be presented to them?
These are certainly only a few, at first sight thought-provoking ideas, but they do show how the presented taxonomy provides a framework to foster the creation process of new ideas and approaches regarding both technical and methodological aspects.

3.4. Limitations

Table 2 and Figure 4 are a helpful support for grasping existing work, but certainly also a shortened representation optimized with regard to content. Each individual citation represents a more or less extensive scientific contribution, so the mere quantity of citations in a cluster should not necessarily be understood as ‘scientific weight’. At first glance, this presentation (without the accompanying text in Section 3.1 and Section 3.2) also does not show any temporal links, which means that combined content-related and temporal conclusions are only possible to a limited extent. The authors worked to the best of their knowledge in order to minimize such effects, but due to the necessary content compression and classification, effects such as seeming weightings caused by the visualization cannot be completely excluded. In addition, due to the necessary content focus of a taxonomy, not every aspect of a virtual environment or of the user’s interaction with it can be fully covered. Beyond the aspects mentioned in this work, there are undoubtedly more and deeper classification and analysis possibilities. However, this work is intended to be a starting point to understand recent related work and to refer to further literature (including previous survey papers).

4. Conclusions and Outlook

This survey paper took a look back over the past two decades and highlighted the developments that have led to the current state of the art when it comes to the application of VR for blind and visually impaired people. We proposed and applied a high-level taxonomy to cluster the work done up to now from the perspective of technology, interaction and application. Foremost, we introduced a classification into small-, medium- and large-scale virtual environments to characterize the interaction with and the content of the virtual environment. Our comprehensive table shows that especially grounded force feedback devices for haptic feedback were strongly researched in different application scenarios and mainly from an exocentric perspective. However, such devices have a very limited interaction area, which can be expanded with medium-scale (i.e., physically walkable) virtual environments and completely overcome with large-scale environments. The latter are virtually walkable with an avatar that can be controlled by the user; these virtual environments are, for example, approximations of large, unknown physical environments. The use of novel and widespread interfaces such as smartphones or nowadays commercial off-the-shelf VR components also represents a promising potential. This work contributes to these future developments by summarizing previous knowledge and thus making it as comprehensible and usable as possible to foster future development in this field.

Author Contributions

Conceptualization, methodology, formal analysis, investigation, data curation, writing—original draft preparation, J.K.; writing—review and editing, supervision, T.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Brock, A.M.; Truillet, P.; Oriola, B.; Picard, D.; Jouffrais, C. Interactivity Improves Usability of Geographic Maps for Visually Impaired People. Hum. Comput. Interact. 2015, 30, 156–194. [Google Scholar] [CrossRef]
  2. Götzelmann, T.; Schneider, D. CapCodes: Capacitive 3D Printable Identification and On-screen Tracking for Tangible Interaction. In Proceedings of the 9th Nordic Conference on Human-Computer Interaction, Gothenburg, Sweden, 23–27 October 2016; pp. 32:1–32:4. [Google Scholar]
  3. World Health Organization. World Report on Vision. Available online: https://www.who.int/publications-detail-redirect/world-report-on-vision (accessed on 29 August 2020).
  4. Zhao, Y.; Cutrell, E.; Holz, C.; Morris, M.R.; Ofek, E.; Wilson, A.D. SeeingVR: A Set of Tools to Make Virtual Reality More Accessible to People with Low Vision. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, Scotland, UK, 4–9 May 2019; Association for Computing Machinery: Glasgow, Scotland, UK, 2019; pp. 1–14. [Google Scholar]
  5. De Felice, F.; Renna, F.; Attolico, G.; Distante, A. Hapto-acoustic interaction metaphors in 3d virtual environments for non-visual settings. Virtual Real. 2011, 21. [Google Scholar] [CrossRef] [Green Version]
  6. White, G.R.; Fitzpatrick, G.; McAllister, G. Toward Accessible 3D Virtual Environments for the Blind and Visually Impaired. In Proceedings of the 3rd International Conference on Digital Interactive Media in Entertainment and Arts, Athens, Greece, 10–12 September 2008; pp. 134–141. [Google Scholar]
  7. Lahav, O. Virtual reality as orientation and mobility aid for blind people. J. Assist. Technol. 2014, 8, 95–107. [Google Scholar] [CrossRef]
  8. Ghali, N.I.; Soluiman, O.; El-Bendary, N.; Nassef, T.M.; Ahmed, S.A.; Elbarawy, Y.M.; Hassanien, A.E. Virtual Reality Technology for Blind and Visual Impaired People: Reviews and Recent Advances. In Advances in Robotics and Virtual Reality; Intelligent Systems Reference Library; Gulrez, T., Hassanien, A.E., Eds.; Springer: Berlin/Heidelberg, Germany, 2012; pp. 363–385. ISBN 978-3-642-23363-0. [Google Scholar]
  9. Cobb, S.; Sharkey, P.M. A Decade of Research and Development in Disability, Virtual Reality and Associated Technologies: Review of ICDVRAT 1996-2006. IJVR 2007, 6, 51–68. [Google Scholar]
  10. Milgram, P.; Kishino, F. A taxonomy of mixed reality visual displays. IEICE Trans. Inf. Syst. 1994, 77, 1321–1329. [Google Scholar]
  11. Srinivasan, M.A.; Basdogan, C. Haptics in virtual environments: Taxonomy, research status, and challenges. Comput. Graph. 1997, 21, 393–404. [Google Scholar] [CrossRef]
  12. Fritz, J.P.; Barner, K.E. Design of a haptic data visualization system for people with visual impairments. IEEE Trans. Rehabil. Eng. 1999, 7, 372–384. [Google Scholar] [CrossRef]
  13. Reichinger, A.; Neumüller, M.; Rist, F.; Maierhofer, S.; Purgathofer, W. Computer-Aided Design of Tactile Models. In Proceedings of the Computers Helping People with Special Needs, Linz, Austria, 11–13 July 2012; Miesenberger, K., Karshmer, A., Penaz, P., Zagler, W., Eds.; Springer: Berlin/Heidelberg, Germany, 2012; pp. 497–504. [Google Scholar]
  14. Pacchierotti, C.; Sinclair, S.; Solazzi, M.; Frisoli, A.; Hayward, V.; Prattichizzo, D. Wearable Haptic Systems for the Fingertip and the Hand: Taxonomy, Review, and Perspectives. IEEE Trans. Haptics 2017, 10, 580–600. [Google Scholar] [CrossRef] [Green Version]
  15. Seifi, H.; Fazlollahi, F.; Oppermann, M.; Sastrillo, J.A.; Ip, J.; Agrawal, A.; Park, G.; Kuchenbecker, K.J.; MacLean, K.E. Haptipedia: Accelerating Haptic Device Discovery to Support Interaction & Engineering Design. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, Scotland, UK, 4–9 May 2019; Association for Computing Machinery: Glasgow, Scotland, UK, 2019; pp. 1–12. [Google Scholar]
  16. Picinali, L.; Afonso, A.; Denis, M.; Katz, B.F.G. Exploration of architectural spaces by blind people using auditory virtual reality for the construction of spatial knowledge. Int. J. Hum. Comput. Stud. 2014, 72, 393–407. [Google Scholar] [CrossRef]
  17. Katz, B.F.; Picinali, L. Spatial audio applied to research with the blind. Adv. Sound Localization 2011, 225–250. [Google Scholar] [CrossRef] [Green Version]
  18. Bujacz, M.; Strumiłło, P. Sonification: Review of Auditory Display Solutions in Electronic Travel Aids for the Blind. Arch. Acoust. 2016, 41, 401–414. [Google Scholar] [CrossRef]
  19. Yu, W.; Ramloll, R.; Brewster, S.; Ridel, B. Exploring computer-generated line graphs through virtual touch. In Proceedings of the Sixth International Symposium on Signal Processing and its Applications (Cat.No.01EX467), Kuala Lumpur, Malaysia, 13–16 August 2001; Volume 1, pp. 72–75. [Google Scholar]
  20. Darin, T.; Sánchez, J.; Andrade, R. Dimensions for the design and evaluation of multimodal videogames for the cognition of people who are blind. In Proceedings of the 14th Brazilian Symposium on Human Factors in Computing Systems, Salvador, Brazil, 3–6 November 2015; pp. 1–4. [Google Scholar]
  21. Levesque, V. Blindness, Technology and Haptics. Cent. Intell. Mach. 2005, 28, 19–21. [Google Scholar]
  22. Mesquita, L.; Sánchez, J.; Andrade, R.M.C. Cognitive Impact Evaluation of Multimodal Interfaces for Blind People: Towards a Systematic Review. In Proceedings of the Universal Access in Human-Computer Interaction. Methods, Technologies, and Users, Las Vegas, NV, USA, 15–20 July 2018; pp. 365–384. [Google Scholar]
  23. Mesquita, L.; Sánchez, J. Quali-Quantitative Review of the Use of Multimodal Interfaces for Cognitive Enhancement in People Who Are Blind. In Proceedings of the Universal Access in Human-Computer Interaction. Multimodality and Assistive Environments, Orlando, FL, USA, 26–31 July 2019; pp. 262–281. [Google Scholar]
  24. Yasmin, S. Virtual Reality and Assistive Technologies: A Survey. Int. J. Virtual Real. 2018, 18, 30–57. [Google Scholar] [CrossRef] [Green Version]
  25. Yazzolino, L.A.; Connors, E.C.; Hirsch, G.V.; Sánchez, J.; Merabet, L.B. Developing Virtual Environments for Learning and Enhancing Skills for the Blind: Incorporating User-Centered and Neuroscience Based Approaches. In Virtual Reality for Psychological and Neurocognitive Interventions; Virtual Reality Technologies for Health and Clinical Applications; Rizzo, A., Bouchard, S., Eds.; Springer: New York, NY, USA, 2019; pp. 361–385. ISBN 978-1-4939-9482-3. [Google Scholar]
  26. Façanha, A.R.; Darin, T.; Viana, W.; Sánchez, J. O&M Indoor Virtual Environments for People Who Are Blind: A Systematic Literature Review. ACM Trans. Access. Comput. 2020, 13, 1–42. [Google Scholar] [CrossRef]
  27. Wohlin, C. Guidelines for snowballing in systematic literature studies and a replication in software engineering. In Proceedings of the 18th International Conference on Evaluation and Assessment in Software Engineering—EASE’14, London, UK, 13–14 May 2014; pp. 1–10. [Google Scholar]
  28. Bhowmick, A.; Hazarika, S.M. An insight into assistive technology for the visually impaired and blind people: State-of-the-art and future trends. J. Multimodal User Interfaces 2017, 11, 149–172. [Google Scholar] [CrossRef]
  29. Moher, D.; Liberati, A.; Tetzlaff, J.; Altman, D.G.; Group, T.P. Preferred Reporting Items for Systematic Reviews and Meta-Analyses: The PRISMA Statement. PLoS Med. 2009, 6, e1000097. [Google Scholar] [CrossRef] [Green Version]
  30. Roberts, J.C.; Franklin, K.M.; Cullinane, J. Virtual haptic exploratory visualization of line graphs and charts. In Proceedings of the Stereoscopic Displays and Virtual Reality Systems IX, International Society for Optics and Photonics, San Jose, CA, USA, 19–25 January 2002; Volume 4660, pp. 401–410. [Google Scholar]
  31. Yu, W.; Brewster, S. Multimodal Virtual Reality Versus Printed Medium in Visualization for Blind People. In Proceedings of the Fifth International ACM Conference on Assistive Technologies, Edinburgh, Scotland, 8–10 July 2002; pp. 57–64. [Google Scholar]
  32. Thevin, L.; Briant, C.; Brock, A.M. X-Road: Virtual Reality Glasses for Orientation and Mobility Training of People with Visual Impairments. ACM Trans. Access. Comput. 2020, 13, 1–47. [Google Scholar] [CrossRef]
  33. Tzovaras, D.; Moustakas, K.; Nikolakis, G.; Strintzis, M.G. Interactive mixed reality white cane simulation for the training of the blind and the visually impaired. Pers. Ubiquit. Comput. 2009, 13, 51–58. [Google Scholar] [CrossRef]
  34. Lahav, O.; Mioduser, D. Construction of cognitive maps of unknown spaces using a multi-sensory virtual environment for people who are blind. Comput. Human Behavior 2008, 24, 1139–1155. [Google Scholar] [CrossRef]
  35. Sánchez, J.; Sáenz, M. Metro navigation for the blind. Comput. Educ. 2010, 55, 970–981. [Google Scholar] [CrossRef]
  36. Kreimeier, J.; Hammer, S.; Friedmann, D.; Karg, P.; Bühner, C.; Bankel, L.; Götzelmann, T. Evaluation of Different Types of Haptic Feedback Influencing the Task-based Presence and Performance in Virtual Reality. In Proceedings of the 12th ACM International Conference on PErvasive Technologies Related to Assistive Environments, Rhodes, Greece, 5–7 June 2019; pp. 289–298. [Google Scholar]
  37. Kreimeier, J.; Götzelmann, T. First Steps Towards Walk-In-Place Locomotion and Haptic Feedback in Virtual Reality for Visually Impaired. In Proceedings of the Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, Scotland, UK, 4–9 May 2019; pp. LBW2214:1–LBW2214:6. [Google Scholar]
  38. Martínez, J.; García, A.; Oliver, M.; Molina, J.P.; González, P. Identifying Virtual 3D Geometric Shapes with a Vibrotactile Glove. IEEE Comput. Graph. Appl. 2016, 36, 42–51. [Google Scholar] [CrossRef] [PubMed]
  39. Siu, A.F.; Sinclair, M.; Kovacs, R.; Ofek, E.; Holz, C.; Cutrell, E. Virtual Reality Without Vision: A Haptic and Auditory White Cane to Navigate Complex Virtual Worlds. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020; pp. 1–13. [Google Scholar]
  40. Zhao, Y.; Bennett, C.; Benko, H.; Cutrell, E.; Holz, C.; Morris, M.R.; Sinclair, M. Enabling People with Visual Impairments to Navigate Virtual Reality with a Haptic and Auditory Cane Simulation. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Montreal, QC, Canada, 21–26 April 2018; pp. 1–14. [Google Scholar]
  41. Torres-Gil, M.A.; Casanova-Gonzalez, O.; Gonzalez-Mora, J.L. Applications of Virtual Reality for Visually Impaired People. WSEAS Trans. Comp. 2010, 9, 184–193. [Google Scholar]
  42. Connors, E.C.; Chrastil, E.R.; Sánchez, J.; Merabet, L.B. Virtual environments for the transfer of navigation skills in the blind: A comparison of directed instruction vs. video game based learning approaches. Front. Hum. Neurosci. 2014, 8. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  43. Guerreiro, J.; Ahmetovic, D.; Kitani, K.M.; Asakawa, C. Virtual Navigation for Blind People: Building Sequential Representations of the Real-World. In Proceedings of the 19th International ACM SIGACCESS Conference on Computers and Accessibility, Baltimore, MD, USA, 29 October–1 November 2017; pp. 280–289. [Google Scholar]
  44. Chai, C.; Lau, B.T.; Pan, Z. Hungry Cat—A Serious Game for Conveying Spatial Information to the Visually Impaired. Multimodal Technol. Interact. 2019, 3, 12. [Google Scholar] [CrossRef] [Green Version]
  45. Boletsis, C. The New Era of Virtual Reality Locomotion: A Systematic Literature Review of Techniques and a Proposed Typology. Multimodal Technol. Interact. 2017, 1, 24. [Google Scholar] [CrossRef] [Green Version]
  46. Brewster, S. Visualization tools for blind people using multiple modalities. Disabil. Rehabil. 2002, 24, 613–621. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  47. Magnusson, C.; Rassmus-Gröhn, K.; Sjöström, C.; Danielsson, H. Navigation and recognition in complex haptic virtual environments–reports from an extensive study with blind users. In Proceedings of the Eurohaptics, Edinburgh, UK, 8–10 July 2002; Volume 2002. [Google Scholar]
  48. Panëels, S.A.; Ritsos, P.D.; Rodgers, P.J.; Roberts, J.C. Prototyping 3D haptic data visualizations. Comput. Graph. 2013, 37, 179–192. [Google Scholar] [CrossRef]
  49. Scoy, F.L.V.; Kawai, T.; Darrah, M.; Rash, C. Haptic display of mathematical functions for teaching mathematics to students with vision disabilities: Design and proof of concept. In Haptic Human-Computer Interaction; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2001; pp. 31–40. ISBN 978-3-540-42356-0. [Google Scholar]
  50. Yu, W.; Cheung, K.; Brewster, S. Automatic online haptic graph construction. In Proceedings of the EuroHaptics, Edinburgh, UK, 8–10 July 2002; pp. 128–133. [Google Scholar]
  51. Yu, W.; Ramloll, R.; Brewster, S. Haptic graphs for blind computer users. In Proceedings of the Haptic Human-Computer Interaction, Glasgow, Scotland, UK, 31 August–1 September 2000; Brewster, S., Murray-Smith, R., Eds.; Springer: Berlin/Heidelberg, Germany, 2001; pp. 41–51. [Google Scholar]
  52. Yu, W.; Brewster, S. Evaluation of multimodal graphs for blind people. UAIS 2003, 2, 105–124. [Google Scholar] [CrossRef]
  53. Bowman, E.L.; Liu, L. Individuals with severely impaired vision can learn useful orientation and mobility skills in virtual streets and can use them to improve real street safety. PLoS ONE 2017, 12, e0176534. [Google Scholar] [CrossRef]
  54. Connors, E.; Chrastil, E.; Sanchez, J.; Merabet, L.B. Action video game play and transfer of navigation and spatial cognition skills in adolescents who are blind. Front. Hum. Neurosci. 2014, 8. [Google Scholar] [CrossRef] [Green Version]
  55. Connors, E.C.; Yazzolino, L.A.; Sánchez, J.; Merabet, L.B. Development of an Audio-based Virtual Gaming Environment to Assist with Navigation Skills in the Blind. J. Vis. Exp. 2013, e50272. [Google Scholar] [CrossRef] [PubMed]
  56. Guerreiro, J.; Sato, D.; Ahmetovic, D.; Ohn-Bar, E.; Kitani, K.M.; Asakawa, C. Virtual navigation for blind people: Transferring route knowledge to the real-World. Int. J. Hum. Comput. Stud. 2020, 135, 102369. [Google Scholar] [CrossRef]
  57. Lahav, O.; Schloerb, D.W.; Kumar, S.; Srinivasan, M.A. A Virtual Map to Support People Who are Blind in Navigation through Real Spaces. J. Spec. Educ. Technol. 2011, 26, 41–57. [Google Scholar] [CrossRef]
  58. Lahav, O.; Gedalevitz, H.; Battersby, S.; Brown, D.; Evett, L.; Merritt, P. Virtual environment navigation with look-around mode to explore new real spaces by people who are blind. Disabil. Rehabil. 2018, 40, 1072–1084. [Google Scholar] [CrossRef] [Green Version]
  59. Merabet, L.B.; Connors, E.C.; Halko, M.A.; Sánchez, J. Teaching the Blind to Find Their Way by Playing Video Games. PLoS ONE 2012, 7. [Google Scholar] [CrossRef] [Green Version]
  60. Merabet, L.B.; Sánchez, J. Development of an Audio-Haptic Virtual Interface for Navigation of Large-Scale Environments for People Who Are Blind. In Proceedings of the Universal Access in Human-Computer Interaction. Users and Context Diversity, Toronto, ON, Canada, 17–22 July 2016; Antona, M., Stephanidis, C., Eds.; Springer International Publishing: Cham, Switzerland, 2016; pp. 595–606. [Google Scholar]
  61. Lahav, O.; Mioduser, D. Blind persons’ acquisition of spatial cognitive mapping and orientation skills supported by virtual environment. Int. J. Disabil. Hum. Dev. 2005, 4, 231–238. [Google Scholar] [CrossRef]
  62. Sánchez, J.; Sáenz, M.; Pascual-Leone, A.; Merabet, L. Enhancing navigation skills through audio gaming. In Proceedings of the CHI ’10 Extended Abstracts on Human Factors in Computing Systems, Atlanta, GA, USA, 10–15 April 2010; pp. 3991–3996. [Google Scholar]
  63. Seki, Y.; Sato, T. A Training System of Orientation and Mobility for Blind People Using Acoustic Virtual Reality. IEEE Trans. Neural Syst. Rehabil. Eng. 2011, 19, 95–104. [Google Scholar] [CrossRef]
  64. D’Atri, E.; Medaglia, C.M.; Serbanati, A.; Ceipidor, U.B.; Panizzi, E.; D’Atri, A. A system to aid blind people in the mobility: A usability test and its results. In Proceedings of the Second International Conference on Systems (ICONS’07), Martinique, France, 22–28 April 2007; p. 35. [Google Scholar]
  65. Sánchez, J.; Espinoza, M.; de Borba Campos, M.; Merabet, L.B. Enhancing orientation and mobility skills in learners who are blind through video gaming. In Proceedings of the 9th ACM Conference on Creativity & Cognition, Sydney, Australia, 17–20 June 2013; pp. 353–356. [Google Scholar]
  66. Jansson, G.; Billberger, K. The PHANToM used without visual guidance. In Proceedings of the First PHANToM Users Research Symposium (PURS99), Heidelberg, Germany, 21–22 May 1999; pp. 21–22. [Google Scholar]
  67. Use of a Haptic Device by Blind and Sighted People: Perception of Virtual Textures and Objects. Available online: https://uhra.herts.ac.uk/bitstream/handle/2299/6099/903262.pdf?sequence=1&isAllowed=y (accessed on 11 November 2020).
  68. Lahav, O.; Schloerb, D.W.; Srinivasan, M.A. Rehabilitation program integrating virtual environment to improve orientation and mobility skills for people who are blind. Comput. Educ. 2015, 80, 1–14. [Google Scholar] [CrossRef] [Green Version]
  69. Max, M.L.; Gonzalez, J.R. Blind persons navigate in virtual reality (VR); hearing and feeling communicates “reality”. Stud. Health Technol. Inform. 1997, 39, 54–59. [Google Scholar]
  70. Jansson, G.; Bergamasco, M.; Frisoli, A. A new option for the visually impaired to experience 3D art at museums: Manual exploration of virtual copies. Vis. Impair. Res. 2003, 5, 1–12. [Google Scholar] [CrossRef]
  71. Tzovaras, D.; Nikolakis, G.; Fergadis, G.; Malasiotis, S.; Stavrakis, M. Design and implementation of haptic virtual environments for the training of the visually impaired. IEEE Trans. Neural Syst. Rehabil. Eng. 2004, 12, 266–278. [Google Scholar] [CrossRef]
  72. Iglesias, R.; Casado, S.; Gutierrez, T.; Barbero, J.I.; Avizzano, C.A.; Marcheschi, S.; Bergamasco, M. Computer graphics access for blind people through a haptic and audio virtual environment. In Proceedings of the Second International Conference on Creating, Connecting and Collaborating through Computing, Kyoto, Japan, 30 January 2004; pp. 13–18. [Google Scholar]
  73. Lecuyer, A.; Mobuchon, P.; Megard, C.; Perret, J.; Andriot, C.; Colinot, J.P. HOMERE: A multimodal system for visually impaired people to explore virtual environments. In Proceedings of the IEEE Virtual Reality, Los Angeles, CA, USA, 22–26 March 2003; pp. 251–258. [Google Scholar]
  74. Ohuchi, M.; Iwaya, Y.; Suzuki, Y. Cognitive-Map Forming of the Blind in Virtual Sound Environment; Georgia Institute of Technology: Atlanta, GA, USA, 2006. [Google Scholar]
  75. Pokluda, L.; Sochor, J. Spatial haptic orientation for visually impaired people. EG 2003, 3, 29–34. [Google Scholar]
  76. Evett, L.; Brown, D.; Battersby, S.; Ridley, A.; Smith, P. Accessible virtual environments for people who are blind-creating an intelligent virtual cane using the Nintendo Wii controller. In Proceedings of the 7th International Conference on Virtual Rehabilitation (ICVDRAT), Maia & Porto, Portugal, 8–11 September 2008; pp. 271–278. [Google Scholar]
  77. Parente, P.; Bishop, G. BATS: The blind audio tactile mapping system. In Proceedings of the ACM Southeast regional conference, Savannah, GA, USA, 7–8 March 2003; pp. 132–137. [Google Scholar]
  78. Lahav, O.; Mioduser, D. Multisensory virtual environment for supporting blind persons’ acquisition of spatial cognitive mapping—A case study. In Proceedings of the EdMedia+ Innovate Learning, Norfolk, VA, USA, 2001; Association for the Advancement of Computing in Education (AACE): Waynesville, NC, USA, 2001; pp. 1046–1051. [Google Scholar]
  79. Sánchez, J.; Lumbreras, M. Usability and cognitive impact of the interaction with 3D virtual interactive acoustic environments by blind children. In Proceedings of the 3rd International Conference on Disability, Virtual Reality and Associated Technologies, Alghero, Sardinia, Italy, 23–25 September 2000; pp. 67–73. [Google Scholar]
  80. Sánchez, J. Development of Navigation Skills Through Audio Haptic Videogaming in Learners Who are Blind. Procedia Comput. Sci. 2012, 14, 102–110. [Google Scholar] [CrossRef] [Green Version]
  81. Sánchez, J.; Sáenz, M. Three-Dimensional Virtual Environments for Blind Children. CyberPsychol. Behav. 2006, 9, 200–206. [Google Scholar] [CrossRef] [PubMed]
  82. Kurniawan, S.; Sporka, A.; Nemec, V.; Slavik, P. Design and user evaluation of a spatial audio system for blind users. In Proceedings of the 5th International Conference on Disability, Virtual Reality and Associated Technologies, Oxford, UK, 20–22 September 2004; pp. 175–182. [Google Scholar]
  83. Kreimeier, J.; Götzelmann, T. FeelVR: Haptic Exploration of Virtual Objects. In Proceedings of the 11th PErvasive Technologies Related to Assistive Environments Conference, Corfu, Greece, 26–29 June 2018; pp. 122–125. [Google Scholar]
  84. Kreimeier, J.; Karg, P.; Götzelmann, T. Tabletop Virtual Haptics: Feasibility Study for the Exploration of 2.5D Virtual Objects by Blind and Visually Impaired with Consumer Data Gloves. In Proceedings of the 13th ACM International Conference on PErvasive Technologies Related to Assistive Environments, Corfu, Greece, 30 June–3 July 2020; Article 30. pp. 1–10. [Google Scholar]
  85. Andrade, R.; Baker, S.; Waycott, J.; Vetere, F. Echo-house: Exploring a virtual environment by using echolocation. In Proceedings of the 30th Australian Conference on Computer-Human Interaction, Melbourne, Australia, 4–7 December 2018; pp. 278–289. [Google Scholar]
  86. Ahlmark, D.I. Haptic Navigation Aids for the Visually Impaired. Ph.D. Thesis, Luleå Tekniska Universitet, Lulea, Sweden, 2016. [Google Scholar]
  87. González-Mora, J.L. VASIII: Development of an interactive device based on virtual acoustic reality oriented to blind rehabilitation. Jorn. Seguim. Proy. Tecnol. Inf. 2003, 2, 2001–3976. [Google Scholar]
  88. Moldoveanu, A.D.B.; Ivascu, S.; Stanica, I.; Dascalu, M.-I.; Lupu, R.; Ivanica, G.; Balan, O.; Caraiman, S.; Ungureanu, F.; Moldoveanu, F.; et al. Mastering an advanced sensory substitution device for visually impaired through innovative virtual training. In Proceedings of the 2017 IEEE 7th International Conference on Consumer Electronics—Berlin (ICCE-Berlin), Berlin, Germany, 3–6 September 2017; pp. 120–125. [Google Scholar]
  89. Yatani, K.; Banovic, N.; Truong, K. SpaceSense: Representing Geographical Information to Visually Impaired People Using Spatial Tactile Feedback. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Austin, TX, USA, 5–10 May 2012; pp. 415–424. [Google Scholar]
  90. BlindSquare. Available online: https://www.blindsquare.com/de/ (accessed on 30 April 2020).
  91. Rodríguez, A.; Boada, I.; Sbert, M. An Arduino-based device for visually impaired people to play videogames. Multimed Tools Appl. 2018, 77, 19591–19613. [Google Scholar] [CrossRef]
  92. Sánchez, J.; Espinoza, M. Audio haptic videogaming for navigation skills in learners who are blind. In Proceedings of the 13th International ACM SIGACCESS Conference on Computers and Accessibility, Dundee, Scotland, UK, 24–26 October 2011; pp. 227–228. [Google Scholar]
  93. Sánchez, J.; Mascaró, J. Audiopolis, Navigation through a Virtual City Using Audio and Haptic Interfaces for People Who Are Blind. In Proceedings of the Universal Access in Human-Computer Interaction, Orlando, FL, USA, 9–14 July 2011; Stephanidis, C., Ed.; Springer: Berlin/Heidelberg, Germany, 2011; pp. 362–371. [Google Scholar]
  94. Huang, Y.Y. Exploration in 3D Virtual Worlds with Haptic-Audio Support for Nonvisual Spatial Recognition. In Proceedings of the HCIS 2010: Human-Computer Interaction, Brisbane, Australia, 20–23 September 2010; pp. 269–272. [Google Scholar]
  95. Colwell, C.; Petrie, H.; Kornbrot, D.; Hardwick, A.; Furner, S. Haptic Virtual Reality for Blind Computer Users. In Proceedings of the Third International ACM Conference on Assistive Technologies, Marina del Rey, CA, USA, 15–17 April 1998; pp. 92–99. [Google Scholar]
  96. Hardwick, A.; Furner, S.; Rush, J. Tactile access for blind people to virtual reality on the World Wide Web. In Proceedings of the IEE Colloquium on Developments in Tactile Displays (Digest No. 1997/012), London, UK, 21 January 1997; pp. 9/1–9/3. [Google Scholar]
  97. Hardwick, A.; Furner, S.; Rush, J. Tactile display of virtual reality from the World Wide Web—A potential access method for blind people. Displays 1998, 18, 153–161. [Google Scholar] [CrossRef]
  98. Nikolakis, G.; Tzovaras, D.; Moustakidis, S.; Strintzis, M.G. Cybergrasp and phantom integration: Enhanced haptic access for visually impaired users. In Proceedings of the 9th Conference Speech and Computer, Saint-Petersburg, Russia, 20–22 September 2004; pp. 507–513. [Google Scholar]
  99. Penn, P.; Petrie, H.; Colwell, C.; Kornbrot, D.; Furner, S.; Hardwick, A. The Perception of Texture, Object Size and Angularity by Touch in Virtual Environments with Two Haptic Devices; University of Glasgow: Glasgow, Scotland, UK, 2000. [Google Scholar]
  100. Nam, C.S.; Whang, M.; Liu, S.; Moore, M. Wayfinding of Users with Visual Impairments in Haptically Enhanced Virtual Environments. Int. J. Hum. Comput. Interact. 2015, 31, 295–306. [Google Scholar] [CrossRef]
  101. Todd, C.; Mallya, S.; Majeed, S.; Rojas, J.; Naylor, K. VirtuNav: A Virtual Reality indoor navigation simulator with haptic and audio feedback for the visually impaired. In Proceedings of the 2014 IEEE Symposium on Computational Intelligence in Robotic Rehabilitation and Assistive Technologies (CIR2AT), Orlando, FL, USA, 9–12 December 2014; pp. 1–8. [Google Scholar]
  102. Yu, J.; Habel, C. A Haptic-Audio Interface for Acquiring Spatial Knowledge about Apartments. In Proceedings of the Haptic and Audio Interaction Design, Lund, Sweden, 23–24 August 2012; Magnusson, C., Szymczak, D., Brewster, S., Eds.; Springer: Berlin/Heidelberg, Germany, 2012; pp. 21–30. [Google Scholar]
  103. Schneider, O.; Shigeyama, J.; Kovacs, R.; Roumen, T.J.; Marwecki, S.; Boeckhoff, N.; Gloeckner, D.A.; Bounama, J.; Baudisch, P. DualPanto: A Haptic Device that Enables Blind Users to Continuously Interact with Virtual Worlds. In Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology—UIST’18, Berlin, Germany, 14–17 October 2018; pp. 877–887. [Google Scholar]
  104. Sjostrom, C. Designing haptic computer interfaces for blind people. In Proceedings of the Sixth International Symposium on Signal Processing and its Applications (Cat.No.01EX467), Kuala Lumpur, Malaysia, 13–16 August 2001; Volume 1, pp. 68–71. [Google Scholar]
  105. Sjöström, C. The IT Potential of Haptics. Licentiate Thesis, Lund University, Lund, Sweden, 1999. [Google Scholar]
  106. Sjöström, C.; Rassmus-Gröhn, K. The sense of touch provides new computer interaction techniques for disabled people. Technol. Disabil. 1999, 10, 45–52. [Google Scholar] [CrossRef] [Green Version]
  107. Wu, J.; Song, A.; Zou, C. A novel haptic texture display based on image processing. In Proceedings of the 2007 IEEE International Conference on Robotics and Biomimetics (ROBIO), Sanya, China, 15–18 December 2007; pp. 1315–1320. [Google Scholar]
  108. Panëels, S.A.; Roberts, J.C.; Rodgers, P.J. Haptic Interaction Techniques for Exploring Chart Data. In Proceedings of the Haptic and Audio Interaction Design, Dresden, Germany, 10–11 September 2009; Altinsoy, M.E., Jekosch, U., Brewster, S., Eds.; Springer: Berlin/Heidelberg, Germany, 2009; pp. 31–40. [Google Scholar]
  109. Yu, W.; Brewster, S. Comparing two haptic interfaces for multimodal graph rendering. In Proceedings of the 10th Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems, Orlando, FL, USA, 24–25 March 2002; pp. 3–9. [Google Scholar]
  110. Yasmin, S.; Panchanathan, S. Haptic mirror: A platform for active exploration of facial expressions and social interaction by individuals who are blind. In Proceedings of the 15th ACM SIGGRAPH Conference on Virtual-Reality Continuum and Its Applications in Industry, Zhuhai, China, 3–4 December 2016; Volume 1, pp. 319–329. [Google Scholar]
  111. Brayda, L.; Campus, C.; Gori, M. Predicting Successful Tactile Mapping of Virtual Objects. IEEE Trans. Haptics 2013, 6, 473–483. [Google Scholar] [CrossRef]
  112. Magnuson, C.; Rassmus-Gröhn, K. Non-visual zoom and scrolling operations in a virtual haptic environment. In Proceedings of the Eurohaptics, Dublin, Ireland, 6–9 July 2003. [Google Scholar]
  113. Moustakas, K.; Nikolakis, G.; Kostopoulos, K.; Tzovaras, D.; Strintzis, M.G. Haptic Rendering of Visual Data for the Visually Impaired. IEEE Multimed. 2007, 14, 62–72. [Google Scholar] [CrossRef]
  114. Palieri, M.; Guaragnella, C.; Attolico, G. Omero 2.0. In Proceedings of the Augmented Reality, Virtual Reality, and Computer Graphics, Otranto, Italy, 24–27 June 2018; De Paolis, L.T., Bourdot, P., Eds.; Springer International Publishing: Cham, Switzerland, 2018; pp. 21–34. [Google Scholar]
  115. Schneider, J.; Strothotte, T. Constructive Exploration of Spatial Information by Blind Users. In Proceedings of the Fourth International ACM Conference on Assistive Technologies, Arlington, VA, USA, 13–15 November 2000; pp. 188–192. [Google Scholar]
  116. Kostopoulos, K.; Moustakas, K.; Tzovaras, D.; Nikolakis, G.; Thillou, C.; Gosselin, B. Haptic access to conventional 2D maps for the visually impaired. J. Multimodal User Interfaces 2007, 1, 13–19. [Google Scholar] [CrossRef]
  117. Lahav, O.; Schloerb, D.W.; Srinivasan, M.A. Newly blind persons using virtual environment system in a traditional orientation and mobility rehabilitation program: A case study. Disabil. Rehabil. Assist. Technol. 2012, 7, 420–435. [Google Scholar] [CrossRef] [PubMed]
  118. Lahav, O.; Mioduser, D. A blind person’s cognitive mapping of new spaces using a haptic virtual environment. J. Res. Spec. Educ. Needs 2003, 3, 172–177. [Google Scholar] [CrossRef]
  119. Lahav, O.; Mioduser, D. Exploration of Unknown Spaces by People Who are Blind Using a Multi-Sensory Virtual Environment. J. Spec. Educ. Technol. 2004, 19, 15–23. [Google Scholar] [CrossRef]
  120. Lahav, O.; Mioduser, D. Haptic-feedback support for cognitive mapping of unknown spaces by people who are blind. Int. J. Hum. Comput. Stud. 2008, 66, 23–35. [Google Scholar] [CrossRef]
  121. Schloerb, D.W.; Lahav, O.; Desloge, J.G.; Srinivasan, M.A. BlindAid: Virtual environment system for self-reliant trip planning and orientation and mobility training. In Proceedings of the 2010 IEEE Haptics Symposium, Waltham, MA, USA, 25–26 March 2010; pp. 363–370. [Google Scholar]
  122. Simonnet, M.; Vieilledent, S.; Jacobson, D.R.; Tisseau, J. The assessment of non visual maritime cognitive maps of a blind sailor: A case study. J. Maps 2010, 6, 289–301. [Google Scholar] [CrossRef] [Green Version]
  123. Tatsumi, H.; Murai, Y.; Sekita, I.; Tokumasu, S.; Miyakawa, M. Cane Walk in the Virtual Reality Space Using Virtual Haptic Sensing: Toward Developing Haptic VR Technologies for the Visually Impaired. In Proceedings of the 2015 IEEE International Conference on Systems, Man, and Cybernetics, Kowloon, China, 9–12 October 2015; pp. 2360–2365. [Google Scholar]
  124. González-Mora, J.L.; Rodríguez-Hernández, A.; Rodríguez-Ramos, L.F.; Díaz-Saco, L.; Sosa, N. Development of a new space perception system for blind people, based on the creation of a virtual acoustic space. In Proceedings of the Engineering Applications of Bio-Inspired Artificial Neural Networks, Alicante, Spain, 2–4 June 1999; Mira, J., Sánchez-Andrés, J.V., Eds.; Springer: Berlin, Germany, 1999; pp. 321–330. [Google Scholar]
  125. Kunz, A.; Miesenberger, K.; Zeng, L.; Weber, G. Virtual Navigation Environment for Blind and Low Vision People. In Proceedings of the Computers Helping People with Special Needs, Linz, Austria, 11–13 July 2018; pp. 114–122. [Google Scholar]
  126. Rocha Façanha, A.; Viana, W.; Sánchez, J. Editor of O & M Virtual Environments for the Training of People with Visual Impairment. In Proceedings of the Universal Access in Human-Computer Interaction, Theory, Methods and Tools, Orlando, FL, USA, 26–31 July 2019; Antona, M., Stephanidis, C., Eds.; Springer International Publishing: Cham, Switzerland, 2019; pp. 617–627. [Google Scholar]
  127. Espinoza, M.; Sánchez, J.; de Borba Campos, M. Videogaming Interaction for Mental Model Construction in Learners Who Are Blind. In Proceedings of the Universal Access in Human-Computer Interaction. Universal Access to Information and Knowledge, Heraklion, Crete, Greece, 22–27 June 2014; Stephanidis, C., Antona, M., Eds.; Springer International Publishing: Cham, Switzerland, 2014; pp. 525–536. [Google Scholar]
  128. Sánchez, J.; Espinoza, M.; Garrido, J.M. Videogaming for wayfinding skills in children who are blind. In Proceedings of the 9th International Conference on Disability, Virtual Reality & Associated Technologies, Laval, France, 10–12 September 2012; pp. 131–140. [Google Scholar]
  129. Cobo, A.; Guerrón, N.E.; Martín, C.; del Pozo, F.; Serrano, J.J. Differences between blind people’s cognitive maps after proximity and distant exploration of virtual environments. Comput. Hum. Behav. 2017, 77, 294–308. [Google Scholar] [CrossRef]
  130. Er, C.C.W.; Lau, B.T.; Zheng, P. An Audio and Haptic Feedback-Based Virtual Environment Spatial Navigation Learning Tool. In Proceedings of the 2018 Joint 10th International Conference on Soft Computing and Intelligent Systems (SCIS) and 19th International Symposium on Advanced Intelligent Systems (ISIS), Toyama, Japan, 5–8 December 2018; pp. 741–746. [Google Scholar]
  131. Guerrón, N.E.; Cobo, A.; Serrano Olmedo, J.J.; Martín, C. Sensitive interfaces for blind people in virtual visits inside unknown spaces. Int. J. Hum. Comput. Stud. 2020, 133, 13–25. [Google Scholar] [CrossRef]
  132. Balan, O.; Moldoveanu, A.; Moldoveanu, F.; Butean, A. Developing a navigational 3D audio game with hierarchical levels of difficulty for the visually impaired players. In Proceedings of the RoCHI, Bucharest, Romania, 24–25 September 2015; pp. 49–54. [Google Scholar]
  133. Lumbreras, M.; Sánchez, J. 3D Aural Interactive Hyperstories for Blind Children. Int. J. Virtual Real. 1999, 4, 18–26. [Google Scholar] [CrossRef] [Green Version]
  134. Lumbreras, M.; Sánchez, J. Interactive 3D sound hyperstories for blind children. In Proceedings of the SIGCHI conference on Human Factors in Computing Systems, Pittsburgh, PA, USA, 15–20 May 1999; pp. 318–325. [Google Scholar]
  135. Maidenbaum, S.; Levy-Tzedek, S.; Chebat, D.-R.; Amedi, A. Increasing Accessibility to the Blind of Virtual Environments, Using a Virtual Mobility Aid Based On the “EyeCane”: Feasibility Study. PLoS ONE 2013, 8, e72555. [Google Scholar] [CrossRef] [PubMed]
  136. Maidenbaum, S.; Buchs, G.; Abboud, S.; Lavi-Rotbain, O.; Amedi, A. Perception of Graphical Virtual Environments by Blind Users via Sensory Substitution. PLoS ONE 2016, 11, e0147501. [Google Scholar] [CrossRef] [PubMed]
  137. Merabet, L.; Sánchez, J. Audio-based navigation using virtual environments: Combining technology and neuroscience. AER J. Res. Pract. Vis. Impair. Blind. 2009, 2, 128–137. [Google Scholar]
  138. Sánchez, J.; Darin, T.; Andrade, R.; Viana, W.; Gensel, J. Multimodal interfaces for improving the intellect of the blind. In Proceedings of the XX Congresso de Informática Educativa–TISE, Santiago, Chile, 1–3 December 2015; Volume 1, pp. 404–413. [Google Scholar]
  139. Sánchez, J.; Elías, M. Science Learning in Blind Children through Audio-Based Games. In Engineering the User Interface: From Research to Practice; Redondo, M., Bravo, C., Ortega, M., Eds.; Springer: London, UK, 2009; pp. 1–16. ISBN 978-1-84800-136-7. [Google Scholar]
  140. Sanchez, J.; Hassler, T. AudioMUD: A Multi-User Virtual Environment for Blind People. In Proceedings of the 2006 International Workshop on Virtual Rehabilitation, New York, NY, USA, 29–30 August 2006; pp. 64–71. [Google Scholar]
  141. Sánchez, J.; Lumbreras, M. Virtual Environment Interaction through 3D Audio by Blind Children. CyberPsychol. Behav. 1999, 2, 101–111. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  142. Sánchez, J.; Maureira, E. Subway mobility assistance tools for blind users. In Universal Access in Ambient Intelligence Environments; Springer: London, UK, 2007; pp. 386–404. [Google Scholar]
  143. Sánchez, J.; Tadres, A. Audio and haptic based virtual environments for orientation and mobility in people who are blind. In Proceedings of the 12th International ACM SIGACCESS Conference on Computers and Accessibility, Orlando, FL, USA, 25–27 October 2010; pp. 237–238. [Google Scholar]
  144. Villane, J.; Sánchez, J. 3D Virtual Environments for the Rehabilitation of the Blind. In Proceedings of the Universal Access in Human-Computer Interaction, San Diego, CA, USA, 19–24 July 2009; pp. 246–255. [Google Scholar]
  145. Allain, K.; Dado, B.; Van Gelderen, M.; Hokke, O.; Oliveira, M.; Bidarra, R.; Gaubitch, N.D.; Hendriks, R.C.; Kybartas, B. An audio game for training navigation skills of blind children. In Proceedings of the 2015 IEEE 2nd VR Workshop on Sonic Interactions for Virtual Environments (SIVE), Arles, France, 24 March 2015; pp. 1–4. [Google Scholar]
  146. Baker, R.M.; Ramos, K.; Turner, J.R. Game design for visually-impaired individuals: Creativity and innovation theories and sensory substitution devices influence on virtual and physical navigation skills. Ir. J. Technol. Enhanc. Learn. 2019, 4, 36–47. [Google Scholar] [CrossRef] [Green Version]
  147. Guerrón Paredes, N.E.; Cobo, A.; Martín, C.; Serrano, J.J. Methodology for Building Virtual Reality Mobile Applications for Blind People on Advanced Visits to Unknown Interior Spaces. In Proceedings of the 14th International Conference Mobile Learning, Lisbon, Portugal, 14–16 April 2018; pp. 3–14. [Google Scholar]
  148. Kreimeier, J.; Karg, P.; Götzelmann, T. BlindWalkVR: Formative Insights into Blind and Visually Impaired People’s VR Locomotion using Commercially Available Approaches. In Proceedings of the 13th ACM International Conference on PErvasive Technologies Related to Assistive Environments, Corfu, Greece, 30 June–3 July 2020; Article 29. pp. 1–8. [Google Scholar]
  149. Dong, M.; Guo, R. Towards understanding the capability of spatial audio feedback in virtual environments for people with visual impairments. In Proceedings of the 2016 IEEE 2nd Workshop on Everyday Virtual Reality (WEVR), Greenville, SC, USA, 20 March 2016; pp. 15–20. [Google Scholar]
Figure 1. Example to distinguish between grounded and wearable haptic feedback following the differentiation from [14]. On the left side one can see a grounded force feedback device (Sensable Phantom), and on the right side a wearable force feedback device (SenseGlove). With the latter, the user's hand is more flexible but is not prevented from accidentally moving inside a virtual object.
Figure 2. Applied workflow for systematic literature search (adapted from [29]).
Figure 3. Schematic representation of the distinction between small, medium and large scale based on the user's interface for spatial exploration.
Figure 4. Common examples of small, medium and large scale walkable and touchable VR applications following [40,54,57]. From left to right: exocentric exploration with grounded force feedback, egocentric exploration with a virtual white cane in a tracked environment, and egocentric exploration by controlling an avatar with a keyboard or game controller. In each case, the haptic exploration of spatial information is supplemented by audio feedback; as examples of spatial information, a floor plan, an outdoor scene and simple geometric shapes are shown.
Table 1. Definition and example of small, medium and large-scale according to parameters of the respective VEs to classify existing related work.
Scale | Small | Medium | Large
Exploration Interface | Hand reachable with mostly grounded force feedback | Room size tracked walkable area with non-grounded haptic feedback | Controller based relative locomotion with (non-)grounded haptic feedback
Common Hardware | Data glove or Phantom | Controller (white cane simulation) | Game controller, keyboard or joystick
Positioning | Absolute (avatar or hand) | Absolute (avatar and hands) | Relative (avatar)
Content | Scaled to fit available space | Only section of a larger VE to fit in tracked area | Building or urban environment with no space limitations
Common example | Exploring charts and graphs [30,31] | Train O+M skill in certain urban scene [32,33] | Explore unknown building [34] or learn subway network [35]
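To illustrate how these parameters can be operationalized, the following minimal Python sketch encodes a VE setup's exploration interface and positioning and assigns it to one of the three scale clusters. It is only an illustrative simplification on our part; the class and field names (e.g., VESetup, hand_reachable, physically_walkable) are hypothetical and are not taken from any of the surveyed systems.

```python
from dataclasses import dataclass
from enum import Enum


class Scale(Enum):
    SMALL = "small"    # hand-reachable workspace, mostly grounded force feedback
    MEDIUM = "medium"  # room-size tracked area, physically walkable at true scale
    LARGE = "large"    # avatar-walkable VE via controller-based relative locomotion


class Positioning(Enum):
    ABSOLUTE = "absolute"  # tracked hand/body position is mapped directly into the VE
    RELATIVE = "relative"  # avatar is displaced relative to its previous position


@dataclass
class VESetup:
    """Hypothetical description of a touchable/walkable VE setup (illustrative only)."""
    hand_reachable: bool       # content fits within arm's reach (e.g., Phantom or data glove workspace)
    physically_walkable: bool  # user walks through a tracked area at true scale
    positioning: Positioning


def classify(setup: VESetup) -> Scale:
    """Simplified assignment of a setup to one of the three clusters of Table 1."""
    if setup.hand_reachable:
        return Scale.SMALL
    if setup.physically_walkable and setup.positioning is Positioning.ABSOLUTE:
        return Scale.MEDIUM
    return Scale.LARGE


# Example: a keyboard- or game-controller-driven audio-haptic VE with relative
# avatar positioning falls into the large-scale cluster.
print(classify(VESetup(hand_reachable=False, physically_walkable=False,
                       positioning=Positioning.RELATIVE)))  # Scale.LARGE
```

Borderline combinations (e.g., grounded force feedback used from an egocentric perspective, as in the first row of Table 2) would require additional parameters and are deliberately left out of this sketch.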
Table 2. Clustering of related work based on the proposed taxonomy. In addition, a frequent example of application and evaluation is given for each cluster. In the first column, small scale works are marked green, medium scale blue and large scale works grey.
Scale | Exploration | Interaction Perspective | Exemplary Application Scenario | Frequent Evaluation & Metrics | Relevant Work
Small | Grounded force feedback | Egocentric | Feel local environment by means of an audio-haptic VE | Usability, feasibility or O+M Questionnaire | [65,91,92,93,94]
Small | Vibrotactile feedback | Exocentric | Explore geometric shapes and floor plans | Object identification and task load | [84]
Small | Grounded force feedback | Exocentric | Generic virtual objects to be explored (mainly 3D content in real world) | Object identification or understanding, very often only technical prototype | [38,47,52,66,67,69,70,95,96,97,98,99,100,101,102]
Small | Grounded force feedback | Exocentric | Audio-haptic games (e.g., 3D memory, battleships) | Feasibility study and function test | [72,103,104,105,106]
Small | Grounded force feedback | Exocentric | Mathematical graphs, lines, diagrams and charts (mainly 2D content in real world) | Level of detail of conveyed content and questionnaire of task load | [12,19,30,31,46,47,48,49,50,51,72,107,108,109,110]
Small | Grounded force feedback | Exocentric | Miniature map | Rebuild with physical properties and questionnaire of usability | [47,72,75,80,92,111,112,113,114,115,116]
Small | Grounded force feedback | Exocentric | Explore VE which is proxy of real space | Exploration strategy and transfer of cognitive model to real space | [34,47,57,61,68,78,117,118,119,120,121,122]
Medium | Wearable haptic feedback | Egocentric | Virtual cane to explore true scale section of larger VE | Rebuild with physical properties or measure usability | [33,39,40,71,123]
Medium | Auditory walkable | Egocentric | Create acoustic proxy of real space or train navigation | Usability and efficiency of training scenario or cognitive model | [41,53,63,64,87,124,125]
Medium | Smartphone | Egocentric | Train street crossing or create room for O+M class | Feasibility and efficiency of training scenario | [32,126]
Large | Audio-haptic feedback | Egocentric | Explore VE which is proxy of real space with virtual cane | Task load and usability | [37,58,73,76,77,88,127,128]
Large | Smartphone | Egocentric | Train real path which is walkable in VE | Rebuild with physical properties or real-world navigation | [43,44,56,129,130,131]
Large | Audio Feedback | Egocentric | Explore VE with controller or keyboard, oftentimes gaming approach | Usability or cognitive load questionnaire, function test | [16,35,42,54,55,59,60,62,74,79,81,85,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149]