Article

Shared eHMI: Bridging Human–Machine Understanding in Autonomous Wheelchair Navigation

1 Department of Industrial Design, Guangdong University of Technology, Guangzhou 510090, China
2 Guangdong International Center of Advanced Design, Guangdong University of Technology, Guangzhou 510090, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(1), 463; https://doi.org/10.3390/app14010463
Submission received: 20 November 2023 / Revised: 28 December 2023 / Accepted: 3 January 2024 / Published: 4 January 2024
(This article belongs to the Special Issue New Insights into Human-Computer Interaction)

Abstract

As automated driving system (ADS) technology is adopted in wheelchairs, clarity on the vehicle’s imminent path becomes essential for both users and pedestrians. For users, understanding the imminent path helps mitigate anxiety and facilitates real-time adjustments. For pedestrians, this insight aids in predicting their next move when near the wheelchair. This study introduces an on-ground projection-based shared eHMI approach for autonomous wheelchairs. By visualizing imminent motion intentions on the ground through the integration of real and virtual elements, the approach quickly clarifies wheelchair behaviors for all parties, promoting proactive measures to reduce collision risks and ensure smooth wheelchair driving. To explore the practical application of the shared eHMI, a user interface was designed and incorporated into an autonomous wheelchair simulation platform. An observation-based pilot study was conducted with both experienced wheelchair users and pedestrians using structured questionnaires to assess the usability, user experience, and social acceptance of this interaction. The results indicate that the proposed shared eHMI offers a clearer display of motion intentions and stronger appeal, emphasizing its potential contribution to the field. Future work should focus on improving visibility, practicality, safety, and trust in autonomous wheelchair interactions.

1. Introduction

Wheelchairs are essential for people with mobility impairments, supporting their health and independence [1]. However, operating a wheelchair can be challenging for the elderly and those with upper limb injuries. Moreover, wheelchair users face a higher probability of upper limb injuries because of their reliance on their arms [2]. Additionally, the joystick controllers of electric wheelchairs are difficult to operate for users with hand impairments [3]. If autonomous driving functionality is applied to electric wheelchairs, users could move to their desired destinations without the need to manually operate the wheelchair [2]. This would reduce their travel burden and enhance their independence in mobility.
The emergence of automated driving system (ADS) technology signifies a transformative epoch in intelligent autonomous mobility, especially in personal devices such as electric wheelchairs [2,4,5]. The adoption of autonomous driving technology enhances the convenience and independence of wheelchair users; however, these advancements introduce novel paradigms and conundrums in human–machine interface dynamics [6]. In contrast to conventional manual joystick mechanisms, autonomous wheelchair users often exhibit apprehension towards machine-dictated trajectories, which can appear to stem from an opaque decision-making algorithm [7,8]. Transparency here refers to “the degree of shared intent and shared awareness between a human and a machine” [9]. Therefore, to achieve such transparency, the automated driving system needs to convey its capabilities, motion intentions, and anticipated actions to the user.
Notably, the terrains navigated by wheelchairs tend to be more complex than conventional vehicular roads [7,10]. The human visual system excels at interpreting these terrain nuances, often surpassing the discernment of standard machine sensors. Such intricacies, when combined with instantaneous path determination, occasionally necessitate human intervention, either through minor AI route adjustments or by re-assuming control. As shown in Figure 1, despite the commendable performance of ADS technology in many scenarios, there are still inherent challenges when confronting intricate road conditions and social interactions. This highlights the significant cognitive burden on users to comprehend the anticipated behaviors. It also reflects how transparent communication aids users in making decisions and intervening in the actions of the autonomous driving system. Moreover, it is essential for pedestrians to predict the autonomous wheelchair’s anticipated actions, facilitating their own appropriate adjustments to prevent potential mishaps [11,12].
Thus, further study and exploration are required on how to effectively visualize the driving status and intentions of autonomous wheelchairs. This is crucial for enhancing the understanding of wheelchair users and other road users about the behaviors of autonomous wheelchairs and thus promoting transparent communication. Currently, there is a gap in the design and research specifically aimed at the visualization of driving status and intentions for autonomous wheelchairs. Additionally, there is a need to evaluate the impact of this human–machine interaction method on the attitudes and acceptance levels of both wheelchair users and other road users.
In this research, we introduce a novel shared external human–machine interaction (eHMI) approach for autonomous wheelchairs, based on on-ground projection. This approach, which combines real-world and virtual elements, communicates the motion intentions of the autonomous wheelchair to users and the pedestrians around them. It aims to enhance the trust of wheelchair users in the autonomous wheelchair, improve safety among all road users, and foster friendly interactions between them.
The contributions of this paper include the following:
  • Proposal and development of a unique eHMI approach: We developed an on-ground projection-based shared eHMI system for autonomous wheelchairs. This approach is unique in its method of visualizing motion intentions, enhancing clarity and safety in autonomous wheelchair navigation.
  • Implementation and evaluation of an autonomous wheelchair simulation platform: We implemented the proposed shared eHMI on an autonomous wheelchair simulation platform and evaluated it. This involved exploring visual communication strategies for effective projection interaction, adding depth to the field of human–machine interaction research.
  • Enhancing accessibility and user experience: Our system facilitates barrier-free travel and improves the overall experience for wheelchair users. This is achieved by providing clear, unobtrusive communication through the shared eHMI system.
  • Advancing understanding of human–machine collaboration: Our findings offer insights into the importance of transparent communication and human–machine collaboration. We discuss how these factors influence trust and safety in autonomous driving contexts and propose ways to improve pedestrian trust in autonomous wheelchairs.
These contributions underscore the significance of our work in advancing autonomous wheelchair human–machine interactions, with a specific focus on enhancing user safety, communication, and trust.

2. Related Works

2.1. eHMI and Projection Interaction of Autonomous Vehicles

External human–machine interaction (eHMI) arises as a solution for the vehicle to transmit information to potentially dangerous agents in the vicinity, representing a newly developing direction of autonomous driving [13]. eHMI encompasses the interaction design between autonomous vehicles (AVs), pedestrians, and other road users, primarily involving the communication of the vehicle’s status, intentions, and interaction modalities with pedestrians [6].
As ADS technology advances, the clarity and comprehensibility of such interactions become pivotal for road safety. Papakostopoulos et al. conducted an experiment and reported that eHMI has a positive effect on other drivers’ ability to infer the AV motion intentions [14]. In addition, the congruence of eHMI with vehicle kinematics has been shown to be crucial. Rettenmaier et al. found that when eHMI signals matched vehicle movements, drivers experienced fewer crashes and passed more quickly than without eHMI [15]. Faas et al.’s study demonstrated that any eHMI contributes to a more positive feeling towards AVs compared to the baseline condition without eHMI [16]. These indicate that eHMI facilitates effective communication, reducing the uncertainty in intention estimation and thereby preventing accidents.
The design of eHMI could be multiform and multimodal. External interaction screens, matrix light, and projection interfaces are widely used in eHMI design due to their versatile interaction capabilities with diverse road users [17]. Among these, common visual communication modalities encompass emulation of human driver communication cues, textual interfaces, graphical interfaces, lighting interfaces, and projection interfaces [18,19,20,21]. Dou et al. evaluated the multimodal eHMI of AVs in virtual reality. Participants, as pedestrians, felt an increased sense of safety and showed a preference for the visual features of arrows [22], offering insights for future design. Dietrich et al. explored the use of projection-based methods in virtual environments to facilitate interactions between AVs and pedestrians [23]. Additionally, on-market vehicles have experimented with headlight projections to convey information. These highlight the implementation of projection as a visual form of eHMI, suggesting further potential for exploration in this area.
On lightweight vehicles such as wheelchairs and electric bicycles, eHMI in the form of on-ground projections is more reasonable. Utilizing on-ground projections, information can be directly exhibited on the roadway, offering explicit motion intentions or navigation paths. This approach of integrating interfaces with the environment through augmented reality offers communication that is intuitive and visibly clear, unaffected by the movement of vehicles [21,24]. Notably, augmented reality technology and multimodal interfaces are now considered effective means for ensuring smooth and comfortable interactions [25,26]. This provides insights for the exploration of more complex and engaging interfaces, especially through the method of on-ground projections.
eHMI has more value in social attributes and interaction design. According to the research of Clercq et al. and Colley et al. [27,28], eHMIs in the form of on-ground projections, particularly those conveying autonomous driving functions such as parking and alerting pedestrians, are predominantly accepted by the majority of individuals. The experiment by Wang et al. [29] showed that this form of eHMI is safer, more helpful, and better at maintaining users’ attention on the route.
Interestingly, for wheelchairs with on-ground projection-based eHMIs, this system serves not only as an external interaction tool but also plays a pivotal role in the user–wheelchair interaction due to the visibility of the projection in public spaces. Wheelchair users can observe the system’s status through the projections and also interact with it, modifying its operational parameters, thus using it as a user interface for the wheelchairs. In the context of autonomous driving, ensuring the safety of autonomous wheelchairs on the road requires further exploration of their user and external interactions. There is also a need to develop specific and universally accepted UI system designs.

2.2. Shared-Control Wheelchair

Similar to the classification of autonomous driving, smart wheelchairs can be categorized into fully-autonomous and semi-autonomous control modes based on the degree of allocation of driving rights [30,31]. Different autonomous driving levels have distinct control strategies, influencing user interaction and eHMI designs; this impacts the communication of wheelchair information to users, user control mechanisms, and the information shared with other road users.
In a fully-autonomous control mode, there are three possible scenarios for managing the allocation of driving rights for wheelchairs. Firstly, the driving right is completely handed over to the autonomous system: users simply indicate their destination, and the wheelchair navigates there without collisions [32]. Secondly, external collaborative control allows experienced individuals to remotely or directly control the wheelchair. Thirdly, the driving rights of wheelchairs may be controlled by a centralized system at the upper level, which uniformly allocates and manages the control of multiple wheelchairs [33,34]. This approach facilitates large-scale management, ideal for settings like hospitals and nursing homes that need centralized wheelchair oversight. Fully-autonomous wheelchairs have been found to overlook user capabilities and intentions. On the other hand, in semi-autonomous wheelchairs, users tend to favor maintaining control during operation and high-level decision-making processes [35].
The shared-control wheelchair employs a semi-autonomous control mode where the user and the autonomous system collaboratively determine the final motion of the wheelchair, aiming for a safer, more efficient, and comfortable driving experience [36,37]. The shared-control wheelchair transfers part of the driving rights to the autonomous system. When the autonomous system is unable to make a determination or the user has a unique preference, the driving rights are transferred back to the user for specific operations. Wang et al. discussed the human driver model and interaction strategy in shared vehicle control, which protects the user experience when technology applications are limited [38]. Xi et al. [39] proposed a reinforcement learning-based control method that can adjust the control weights between users and wheelchairs so that users’ operational capabilities are fully utilized. Its high level of automation makes it suitable for wheelchair users who are unable to perform manual operations, such as those with high paraplegia or cognitive impairment.
Shared control encompasses two primary strategies. In the first, known as hierarchical shared control, users dictate high-level actions while the wheelchair autonomously handles collision avoidance [40]. The second strategy merges inputs from both the user and the motion planner, moving the wheelchair only when commands from both are present, thereby enhancing user control and collaboration [31,37,41,42]. Guided by these strategies to foster a more suitable human–machine collaboration for shared-control wheelchairs, the user retains higher-level decision-making while allowing the automated system to handle simple driving tasks. The wheelchair moves autonomously only when the user activates the autonomous mode. It automatically handles basic navigation by planning and selecting optimal routes based on user-specified destinations. Users can influence driving settings and motion planning with advanced directives in autonomous mode, such as adding temporary waypoints, yielding to pedestrians, or adjusting speed. Likewise, users have the authority to pause autonomous control for more detailed adjustments to the wheelchair’s behaviors. These control methods ensure a harmonious balance between automation and user preferences in human–machine interaction, optimizing safety and user experience. The driving information is communicated to both users and pedestrians through internal and external interfaces, ensuring clear and effective feedback for the success of the shared control systems [43,44].
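For illustration, the gating and blending idea behind the second strategy can be written down in a few lines. The following Python sketch is not the authors’ implementation; the command representation, weighting, and function name are hypothetical.

```python
def blend_commands(user_cmd, planner_cmd, user_weight=0.5):
    """Blend user and planner velocity commands (hypothetical names and weighting).

    Each command is a (linear_velocity, angular_velocity) pair, or None when
    that party issues no command. The wheelchair moves only when commands from
    both the user and the motion planner are present, mirroring the second
    shared-control strategy described above.
    """
    if user_cmd is None or planner_cmd is None:
        return (0.0, 0.0)  # halt unless both the user and the planner command motion
    v = user_weight * user_cmd[0] + (1.0 - user_weight) * planner_cmd[0]
    w = user_weight * user_cmd[1] + (1.0 - user_weight) * planner_cmd[1]
    return (v, w)

# Example: the planner proposes a gentle left turn while the user pushes straight ahead.
print(blend_commands((0.5, 0.0), (0.5, 0.25)))  # -> (0.5, 0.125)
print(blend_commands(None, (0.5, 0.25)))        # -> (0.0, 0.0), user inactive
```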

2.3. Pedestrians’ Concerns or Opinions about the Uncertainty of Autonomous Vehicles

The majority of previous research has focused on user acceptance of automated vehicles (AVs). Kaye et al. examined the receptiveness of pedestrians to AVs, partially filling the gap in research on pedestrian acceptance of AVs [45]. Pedestrian behavior is challenging to predict due to its dynamic nature, lack of training, and a tendency to disobey rules in certain situations [13]. Pedestrians may feel unsafe interacting with autonomous personal mobility vehicles (APMVs) when uncertain about their driving intentions. A lack of clarity regarding the APMV’s intentions can lead to hesitation or even feelings of danger among pedestrians [46]. Pedestrians’ understanding of driving intentions is mediated by trust in human–machine interactions [47]. AVs are capable of reliably detecting pedestrians, but it is challenging for both AVs and pedestrians to predict each other’s intentions.
Epke et al. examined the effects of using explicit hand gestures and receptive eHMI in the interaction between pedestrians and AVs. The eHMI was shown to help participants predict the AV’s behaviors [48]. In exploring interactions between pedestrians and autonomous wheelchairs, both Watanabe et al. and Moondeep et al. identified that explicitly conveying the wheelchair’s movement intentions, through methods such as ground-projected light paths or red projected arrows, notably enhanced the smoothness of interactions and augmented pedestrian cooperation [49,50]. This underscores the significance of transparently communicating driving intentions in alleviating pedestrians’ concerns regarding the uncertainty of autonomous vehicles. However, interactions and scenarios between autonomous wheelchairs and pedestrians, such as yielding behaviors of the wheelchair, have not been extensively studied. Moreover, systematic exploration of eHMI used in autonomous wheelchairs is needed, along with research into pedestrian acceptance based on these systems.

2.4. The Sociology of Wheelchair eHMI and the Image of Wheelchair Users

With the gradual development and popularization of the shared-control wheelchair, ADS technology will face more challenges and new design opportunities. In this context, eHMI holds potential to enhance public trust in such technology [13]. eHMI not only offers a clear interactive interface, but also helps pedestrians and other road users understand the intentions and behaviors of vehicles [51]. This can reduce anxiety and concerns about the transfer of control to machines or others within the context of shared control between drivers and the autonomous system. Owing to its real-time interactivity, eHMI could become an interface for communicating with society, enhancing societal trust and helping vehicle users convey their emotions and intentions.
Wheelchairs, as vital empowering tools, also represent a continual visible sign of disability, attracting unwanted attention and potentially leading to social stigma [52]. Wheelchair users, sensitive to this, often prefer visual over auditory communication in their daily interactions to avoid such issues [53]. As a vulnerable group, wheelchair users require enhanced driving support and comfortable, engaging external interactions for a valued and self-acknowledged image in society. The experience of users with on-ground projection interfaces is worth exploring to investigate how eHMI can enhance societal understanding and trust in autonomous wheelchairs, fostering more empathetic and friendly connections between users and society.

3. Our Method

3.1. Featured Challenges and Solutions

3.1.1. Sensory Capability Limitations and Human-in-the-Loop Integration

Autonomous wheelchairs predominantly utilize robotics technology, integrating millimeter-wave radar and visual cameras to interpret road conditions. Although these sensors proficiently identify upright structures, their capability to decipher intricate road surfaces remains circumscribed. For instance, detecting deformities like depressions on mud-paved roads poses a significant challenge [54]. In contrast, human decision-making typically relies on a myriad of contextual factors, many of which are extremely difficult to distill into discrete data streams and assign different weights. This level of nuanced decision-making is challenging for autonomous systems to fully replicate, and they are also susceptible to errors and biases, including false positives and omissions [55]. Consequently, if the system autonomously makes decisions, the ramifications of incorrect choices could be extremely serious [56].
The concept of “Human-in-the-Loop” (HITL) refers to the understanding that artificial intelligence, due to its limitations, cannot handle all situations and thus cannot replace humans in real-world applications. Given the robustness and adaptability of humans in complex scenarios, there is a consideration to incorporate human input into the decision-making loop of artificial intelligence systems [57]. To achieve meaningful human control in a HITL system, it is crucial to understand how humans can be involved in autonomous driving decisions from a multidisciplinary and interdisciplinary perspective. This approach considers the intersection of psychology, cognitive science, artificial intelligence, and engineering [57,58].
Users’ confidence in system cognition is paramount. Potential discrepancies between sensor readings and the system’s interpretation require users, akin to drivers of semi-automated vehicles, to consistently monitor both the road and the system’s reactions. This continuous involvement ensures the system’s accurate comprehension and the anticipated vehicular response. From a cognitive science perspective, decision-making is an interactive and continuous dynamic high-level cognitive process involving communication between humans and their environment. It is founded on basic cognitive processes such as perception, memory, and attention, as well as core selection processes [58]. These can be enhanced with the aid of sensors and automatic systems, but the human role remains indispensable throughout the operational process, influencing every decision cycle of the system. This is particularly true in dynamic, highly complex, or uncertain environments where machine performance may be impacted [59]. The human-in-the-loop (HITL) approach delegates tasks that are challenging for machines yet simple for humans, leveraging human aptitude in nuanced environmental perception, adaptive decision-making, and complex pattern recognition [60]. Merging the high-efficiency performance of sensors and autonomous mechanisms with human capabilities provides a highly adaptive solution for navigating scenarios in autonomous wheelchairs, thereby enhancing the overall quality of the system’s perception and responsiveness.

3.1.2. System Visibility Deficits and Transparent Communication via Visual Projections

According to Jakob Nielsen’s usability heuristics [61], system status visibility is paramount. Such visibility, embodying communication and transparency, is indispensable for users expecting predictability and control. Users must be cognizant of their position within an ongoing task to avert undue apprehension. Beyond mere status communication, users should anticipate system behaviors via pertinent feedback. Communicating the current state allows users to feel in control of the system, take appropriate actions to reach their goal, and ultimately trust the autonomous system [62].
Additionally, in autonomous driving technology, transparency is vital for enhancing the quality of interaction with the vehicle system and for improving trust and situational awareness. Effective and reliable communication of the system status and information has a significantly positive impact on both the behavioral and psychological aspects of users. It enhances the cooperative behavior between users and autonomous vehicles and promotes trust. Transparency not only affects the efficiency of the system but also relates to user acceptance and safety [63].
Our proposed shared interaction communicates anticipated wheelchair movements through unintrusive visual feedback. By projecting the imminent path composed of computer-rendered virtual elements, users can juxtapose projected trajectories with real-time scenarios. This assists users in making swift adjustments based on the machine-driven decisions.

3.1.3. Necessity for Efficacious External Interaction and Shared eHMI

Wheelchairs coexist with pedestrians in non-motorized zones. Pedestrian apprehension, stemming from trust deficits and ambiguity regarding wheelchair intentions, mirrors automotive scenarios [46]. Pedestrians, analogously to drivers responding to external signals, prefer real-time insights into autonomous wheelchair motion trajectories. Our on-ground projection system, functioning as a form of augmented reality, meets eHMI requisites by projecting the imminent path to inform pedestrians. This transparent communication aids pedestrians in comprehending wheelchair intentions and adjusting accordingly, attenuating potential anxieties.
The adoption of this shared eHMI strategy significantly enhances reciprocal comprehension and mutual situational awareness, leading to a synergistic and cooperative dynamic among users, pedestrians and the autonomous wheelchair system.

3.1.4. Trust Deficit, HITL, and Swift Takeover

The terrains navigated by wheelchairs are inherently more convoluted than those navigated by automated vehicles. Presently, machine sensors are not as adept at recognizing subtle details as humans, especially in discerning minor road impediments like rain-induced puddles, cracked sewer covers, and irregularities on mud-paved roads. While these nuances may not significantly alter route algorithms, they often lead to a trust deficit among users who might opt for detours due to safety and comfort concerns.
Recognizing this complexity and the limitations of current machine sensor technology, the HITL solution, which embodies human–machine collaboration, capitalizes on human superiority in hazard detection, from unpredictable obstacles like tethered animals to structural hazards. Additionally, path quality assessment intertwines with personal predilections: many users might prefer to avoid even inconsequential waterlogged patches. In instances of unappealing paths, users retain the prerogative to dictate immediate modifications or resume control, ensuring not only safety but also adherence to personal preferences and comfort levels.

3.2. On-Ground Projection-Based Shared eHMI for Autonomous Wheelchair

To enhance the understanding of wheelchair users and other road users regarding the behavior of autonomous wheelchairs and promote transparent communication, thereby boosting their trust in autonomous wheelchairs and improving safe interaction during road-sharing, we propose a ground-projection-based eHMI approach. This method is designed for sharing the visualization of the movement status and intentions of autonomous wheelchairs.
This innovation aims to transparently and distinctly convey the intentions of the wheelchair, ensuring a comprehensive understanding for both users and pedestrians, thereby enriching the interactive experience. A conceptual outline of this approach is presented in Figure 2.
As these projections integrate real-time environmental changes and present a varied color and pattern spectrum, this augmented visual feedback significantly heightens visibility and paves the way for a symbiotic human–machine relationship [64,65].
Bingqing Zhang’s research introduced four conceptual themes pertinent to autonomous wheelchair users: type of information needed, interaction modality, interface location, and adaptable design [42]. Based on this, we considered the types of information interfaces and interaction content to provide for autonomous wheelchair users. Moreover, it is imperative to consider the dynamics between autonomous wheelchairs and other road users. Pedestrians, as the most targeted type of road users [66], also need to access information from autonomous wheelchairs. The incorporation of “status + intention eHMI” augments the user experience, enhances perceived system intelligence, elevates transparency, and contributes to an enhanced perception of safety [16].
We referenced the foundational eHMI use cases framework designed for autonomous cars, where the main suggestions include communicating the vehicle’s status, its intentions, and interactions with pedestrians. The first two employ non-specific strategies, while interactions with pedestrians are more targeted and preserve the negotiation of right-of-way between autonomous vehicles and pedestrians [6]. Based on this framework, we modified and meticulously designed an eHMI framework specific to wheelchair scenarios, incorporating more detailed elements. This new framework synergizes HMI and eHMI to achieve more comprehensive user scenarios.
As shown in Table 1, it includes three categories of use cases: wheelchair’s status for user, wheelchair’s intent, and wheelchair–pedestrian interaction. Based on these use cases, we identified specific scenarios that cover the majority of travel situations. For each scenario, we determined the necessary information to visualize, focusing on the problems users need to solve and the goals they aim to achieve. Specifically, taking the most intuitive and clear navigation path in the interface as an example, it needs to be displayed in multiple scenarios within the intent communication. These scenarios include the wheelchair starting, driving, slowing down or stopping, and turning. This highlights its vital role in conveying real-time path changes, turning, and behavioral intentions of the wheelchair.
This design framework focuses on not just conveying wheelchair status and intentions but also elevating the overall user experience. It incorporates human-in-the-loop considerations for situations like poor road conditions, advising on mode adjustments to align with user preferences.
In line with the insights from Bingqing Zhang’s design implications, the interface should be context-responsive, efficiently channel a broad information range, and preclude information saturation. It is paramount that this interface harmoniously integrates into societal settings, emphasizing user empathy and societal receptiveness, thereby nurturing open communication between users and pedestrians [42].
When designing the user interface for autonomous wheelchairs, the style, position, and size of UI elements, as well as their interaction logic, are determined based on the constructed use scenarios. Important information should be placed in prominent positions, and the size of elements should be sufficient to ensure visibility and clarity. This approach aims to create a pleasant user experience and practical interface functionality. Figure 3 shows the devised user interface, specifically adjusted for wheelchair-specific contexts, as illustrated in the following:
  • An evolved iteration of the contemporary electric wheelchair interface, preserving essential metrics while integrating innovative features such as global direction, mobile-synced navigation guidance, dynamic prompts, and pedestrian-inclusive indications;
  • The main interface highlights a seamless navigation path, with the left pane presenting driving metrics—including a global compass and speed dial—and the right pane offering standard data, such as battery life and destination proximity. Event-driven notifications are displayed contextually within the right pane. When turning, the wheelchair projects a turning arrow and textual information for both the user and pedestrians, displayed in either the top left or top right of the interface;
  • Pedestrian-inclusive modules: A salient autonomous mode acts as a status beacon for pedestrians. During encounters with pedestrians, the wheelchair either delineates a bypass navigation path and a turning arrow or projects a safety line for pedestrians to yield, adjustable according to user preferences;
  • Adaptive user interface allows for scenario-specific transitions, ensuring immediate, pertinent information display, fostering user engagement and pedestrian cooperation. Additionally, as shown in Figure 4, a multifunctional user control panel has been developed on the basis of traditional electric wheelchair controller. This panel not only retains the traditional functions but also incorporates new elements, including buttons integrated into the joystick to switch between autonomous and manual modes. The other buttons are used for activating the projection feature, enabling yielding and turning on standby modes. The interface, based on customizable buttons, serves as a medium for input and output between the user and the system, allowing users to make adjustments to the system.

3.3. Autonomous Wheelchair Simulation Platform

To comprehensively evaluate the usability and practicality of the interaction design, we developed a dedicated autonomous driving simulation platform for conducting user tests and quantitative assessments. Our endeavor commenced with the modification of a consumer-grade electric wheelchair, culminating in the creation of an autonomous wheelchair simulation platform specifically tailored for user interface and interaction testing. Central to the simulation platform’s control mechanism is a wheelchair smart board, which is essentially an ESP32 microcontroller. The host computer, communicating through a local area network, dispatched instructions pertaining to the rotation direction and velocity of the wheels to this microcontroller. In response, the microcontroller generated pulse width modulation (PWM) signals. After these signals were amplified by high-power MOSFETs, the output voltage reached 24 V. Variations in the duty cycle were employed to adjust the output voltage, consequently steering the wheel motors and harnessing differential control to facilitate the wheelchair’s movement for navigation. The primary software, conceived and operationalized within the Unity platform, ran seamlessly on a Windows computer. Leveraging the ultra-wideband (UWB) indoor positioning system [67], it ascertained the wheelchair’s spatial coordinates. Concurrently, the system’s orientation was derived from an IMU, effectuating comprehensive closed-loop control over the wheelchair’s positional and angular parameters.
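To illustrate the differential-control idea described above, the following Python sketch maps a forward-speed and turn-rate command to left/right duty-cycle magnitudes. It is a simplified stand-in for the ESP32 firmware and host logic; the names, scaling, and sign conventions are hypothetical.

```python
def wheel_duty_cycles(linear_speed, turn_rate, max_magnitude=1.0):
    """Map normalized forward-speed and turn-rate commands to left/right PWM duty cycles.

    linear_speed and turn_rate are assumed to lie in [-1, 1]; the returned duty
    cycles are magnitudes in [0, 1], with wheel direction handled separately
    (an illustrative convention, not the authors' firmware).
    """
    left = linear_speed - turn_rate
    right = linear_speed + turn_rate
    # Clamp so that neither wheel command exceeds the drivable range.
    left = max(-max_magnitude, min(max_magnitude, left))
    right = max(-max_magnitude, min(max_magnitude, right))
    # The duty-cycle magnitude sets the averaged output voltage from the 24 V supply.
    return abs(left), abs(right)

# Example: moderate forward speed with a slight right turn (left wheel runs faster).
print(wheel_duty_cycles(0.5, -0.25))  # -> (0.75, 0.25)
```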
The experimental environment in Unity was meticulously architected with predefined wheelchair waypoints, coordinates for obstacles, and pedestrians’ positions and trajectories to correspond directly with the real-world scenarios of wheelchair navigation. Upon initiating navigation, a dynamic navigation route is generated based on the current position of the wheelchair, the waypoints, and the destination points set in the experiment.
Wheelchair motion is categorized into two distinct states: linear motion and turning. If the angular difference between the wheelchair’s current alignment and the immediate forward direction of the planned navigation route remains under a set threshold, the wheelchair perpetuates its linear motion, adjusting the differential wheel speeds to ensure path fidelity. Surpassing this threshold triggers the wheelchair into a turning state, momentarily halting forward motion. The wheels rotate in opposite directions until alignment is re-established, reverting the wheelchair back to its linear motion state. Navigation data, operational speeds, and other salient information are manifested as user interface components within Unity and projected onto the ground using a wireless mini-projector affixed to the wheelchair’s side. The autonomous wheelchair simulation platform is shown in Figure 5; a demo video of the working simulation platform and its interactive behavior can be viewed in the supplemental file.
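The switching logic between linear motion and turning can be summarized as a small state machine. The Python sketch below mirrors the description above; the threshold and tolerance values are hypothetical, as the paper does not state them.

```python
TURN_THRESHOLD_DEG = 20.0      # hypothetical threshold; the exact value is not stated in the paper
REALIGN_TOLERANCE_DEG = 2.0    # hypothetical tolerance for returning to linear motion

def update_motion_state(current_heading_deg, path_heading_deg, state):
    """Switch between 'linear' and 'turning' states based on the heading error.

    Small heading errors keep the wheelchair in linear motion (with differential
    speed corrections applied elsewhere); once the error exceeds the threshold,
    the wheelchair halts and rotates in place until it is realigned with the path.
    """
    # Signed heading error wrapped into [-180, 180)
    error = (path_heading_deg - current_heading_deg + 180.0) % 360.0 - 180.0
    if state == "linear" and abs(error) > TURN_THRESHOLD_DEG:
        return "turning", error
    if state == "turning" and abs(error) < REALIGN_TOLERANCE_DEG:
        return "linear", error
    return state, error

# Example: heading 10 degrees while the path requires 60 degrees -> start turning.
print(update_motion_state(10.0, 60.0, "linear"))  # -> ('turning', 50.0)
```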
Both the user interface and interaction design were sculpted within the Unity ecosystem. User interface components were crafted using Unity’s canvas, with scripted logic controlling their dynamic visibility. In the algorithm implemented for the navigation path, the Unity built-in NavMesh component is utilized to calculate the path from the wheelchair through waypoints to the destination. The LineRenderer component is used to visualize the path. The DataExchanger component receives external data, such as position and angle, to update the wheelchair’s position and rotation. Position updates are smoothed to avoid sudden movements. Additionally, dynamic adjustments to the LineRenderer and the corresponding shader are made. These adjustments alter the visual effects of the navigation path based on the wheelchair’s speed and automatically update the path according to the distance between the wheelchair and the waypoints.
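As an illustration of the waypoint-updating and position-smoothing behavior described above (implemented in Unity in the actual system), the following Python sketch shows one plausible way to smooth incoming position samples and drop waypoints as they are reached; the function names, smoothing factor, and reach radius are hypothetical.

```python
def smooth_position(previous_xy, measured_xy, alpha=0.2):
    """Exponentially smooth incoming UWB position samples to avoid sudden jumps
    in the rendered wheelchair position (alpha is a hypothetical smoothing factor)."""
    return tuple(p + alpha * (m - p) for p, m in zip(previous_xy, measured_xy))

def advance_waypoints(position_xy, waypoints, reach_radius=0.5):
    """Drop the leading waypoint once the wheelchair comes within reach_radius of it,
    so the visualized path is always recomputed from the next unreached waypoint."""
    if waypoints:
        dx = waypoints[0][0] - position_xy[0]
        dy = waypoints[0][1] - position_xy[1]
        if (dx * dx + dy * dy) ** 0.5 < reach_radius:
            return waypoints[1:]
    return waypoints

# Example: the wheelchair is about 0.3 m from the first waypoint, so the path shortens.
waypoints = [(1.0, 2.0), (3.0, 2.0), (3.0, 5.0)]
print(advance_waypoints((1.0, 1.7), waypoints))  # -> [(3.0, 2.0), (3.0, 5.0)]
```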
In the process of projecting the user interface onto the ground, perceptual adjustments were orchestrated, aligning with the typical visual interpretations of the user base. Two cardinal principles are observed during turning behaviors: Firstly, turning indicators, symbolizing preplanned waypoints, are displaced to ensure they remain outside the wheelchair’s immediate footprint and retain visibility. Secondly, these cues maintain alignment with the intended direction of the turn, emphasizing the forward navigation path.
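The first principle, displacing the turning indicator so it stays outside the wheelchair’s footprint while pointing along the intended turn, can be illustrated with a simple offset computation. This Python sketch is illustrative only; the offset distance is hypothetical.

```python
import math

def place_turn_indicator(wheelchair_xy, turn_heading_deg, offset_m=1.0):
    """Offset the projected turning arrow along the intended turn direction so it
    lands outside the wheelchair's footprint and stays visible (offset_m is hypothetical)."""
    rad = math.radians(turn_heading_deg)
    return (wheelchair_xy[0] + offset_m * math.cos(rad),
            wheelchair_xy[1] + offset_m * math.sin(rad))

# Example: a turn towards 90 degrees places the arrow roughly one metre to that side.
print(place_turn_indicator((0.0, 0.0), 90.0))  # -> approximately (0.0, 1.0)
```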

3.4. Pilot Study via Video-Based Experiments and Interviews

In the pilot study, we implemented video-based experiments with our autonomous wheelchair simulation platform [68,69]. Participants engaged with the experiments by watching videos, immersing themselves in first-person perspective scenarios, and were also presented with third-person perspectives for holistic scene comprehension. Following the video presentations, participants were invited to complete questionnaires and partake in interviews for further insights.
  • Experiment 1: Viewpoints of Wheelchair Users
The aim of Experiment 1 was to showcase typical user scenarios of the autonomous wheelchair with the proposed shared eHMI, delineating wheelchair behaviors and interactions with pedestrians. Two distinct experimental routes, as depicted in Figure 6, were chosen to test varied wheelchair behaviors, offering a comprehensive overview of the user interface and interaction designs. In Route 1, two obstacles were placed: a curb to simulate the road’s bend and a general route obstacle to represent unexpected impediments encountered during typical navigational routes. These obstacles disrupted the autonomous navigation, necessitating the autonomous wheelchair to execute tasks like moving straight, turning, and obstacle avoidance. Route 2 was designed to simulate scenarios of coexistence between wheelchairs and pedestrians, incorporating both stationary and moving pedestrian behaviors, to assess the wheelchair’s behaviors in circumventing and yielding to different pedestrian conditions.
Experiment 1 focused on the assessment of on-ground projection interactions within the realm of autonomous driving, alongside gauging user attitudes and experiential metrics. Initially, participants were familiarized with the components of the projection interface, aiding their comprehension of user interface information. Following this, a 5.5 min video was shown to the participants, integrating visual and auditory components. This video, which was segmented into three distinct parts, immersed the participants in the first-person perspective of a user interacting with the autonomous wheelchair. The first segment of the video depicted the autonomous wheelchair navigating along Route 1 without the on-ground projection interaction. The following segments sequentially presented the wheelchair’s navigation on Route 1 and then Route 2 with the projection feature engaged.
  • Experiment 2: Viewpoints of Pedestrians
The purpose of Experiment 2 was to assess the acceptance of pedestrians when interacting with an autonomous wheelchair equipped with on-ground projection technology. This experiment involved a larger number of participants who were invited to engage from the first-person perspective of both stationary and moving pedestrians interacting with an autonomous wheelchair.
The video shown to participants was segmented into two distinct parts: In the first segment, participants were shown sequentially embodying the roles of stationary and moving pedestrians as they interacted with an autonomous wheelchair without projection interaction. The second segment presented interactions between pedestrians and the autonomous wheelchair with the projection feature engaged. The video for Experiment 2 corresponded to the pedestrian perspective depicted in the third segment of Experiment 1. Following the viewing, participants were invited to complete a questionnaire too.

3.5. Measures

Our study was designed to explore user and pedestrian interactions with an autonomous wheelchair, focusing on both the usability of the shared eHMI and the psychological impact of its use.
We adopted the System Usability Scale (SUS) to evaluate the usability of the wheelchair’s on-ground projection interaction within the context of autonomous driving [70]. Relying on a modified questionnaire from Andreas Löcken and colleagues, we measured participants’ attitudes towards the wheelchair with shared eHMI interaction [21]. Participants used a five-point Likert Scale (1 = strongly disagree, 5 = strongly agree) to express their agreement or disagreement with the provided statements, consisting of seven items as detailed in Table A1. Additionally, the Short User Experience Questionnaire (UEQ-S) was deployed to measure users’ experiential metrics [71]. Post-experiment, semi-structured interviews delved into participants’ varied sentiments towards projection interactions, exploring aspects like safety, trust, acceptance, concerns, practicality, and enjoyment. We also sought participants’ views on potential issues, enhancements, and the overall necessity and helpfulness of interactive technologies.
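For reference, the sketch below applies the standard scoring rules for these two instruments: SUS maps ten 1–5 responses to a 0–100 score, and UEQ-S rescales eight 1–7 responses to −3…+3 with pragmatic (items 1–4), hedonic (items 5–8), and overall means. It assumes responses are already oriented so that higher values are more positive, and it is not study-specific code.

```python
def sus_score(responses):
    """Standard SUS scoring: 10 items on a 1-5 scale, result on a 0-100 scale."""
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)  # odd items positive, even items negative
    return total * 2.5

def ueq_s_scores(responses):
    """Standard UEQ-S scoring: 8 items on a 1-7 scale, rescaled to -3..+3.

    Items 1-4 form the pragmatic quality, items 5-8 the hedonic quality,
    and the overall score is the mean of all eight items.
    """
    assert len(responses) == 8
    scaled = [r - 4 for r in responses]
    pragmatic = sum(scaled[:4]) / 4
    hedonic = sum(scaled[4:]) / 4
    overall = sum(scaled) / 8
    return pragmatic, hedonic, overall

# Example with hypothetical responses
print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # -> 75.0
print(ueq_s_scores([5, 5, 5, 6, 6, 5, 6, 6]))     # -> (1.25, 1.75, 1.5)
```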
For measuring pedestrian acceptance in Experiment 2, we utilized a modified questionnaire based on the UTAUT model, encompassing dimensions like performance expectancy, effort expectancy, and intentions [72,73]. Additionally, we employed the UEQ-S to gauge pedestrians’ experience with the projection interaction.

3.6. Participants

In Experiment 1, as shown in Table 2, we recruited 20 participants, with an even gender distribution, ranging in age from 23 to 82 (mean: 36, SD: 13.695, median: 31.5), all residing in China. They were grouped as either novice or experienced based on their wheelchair usage duration. Recruitment was achieved via acquaintances, local disability organizations, and online media. All participants adopted the perspective of a wheelchair user, filling out questionnaires and engaging in semi-structured interviews post-experiment. They were compensated with the approximate equivalent of USD 14.
For Experiment 2, which focused on pedestrian acceptance, a total of 107 participants (66 males and 41 females) with an average age of 20 were recruited from the general public. Adopting the perspective of pedestrians, they completed questionnaires following the experiment.

3.7. Procedure

The procedure for Experiment 1 entailed:
  • Orientation and guidance: Staff introduced the experiment’s purpose, research context, and flow, as well as an understanding of the relevant video interface and design elements. Participants were informed and provided consent.
  • Video experiment: Participants remotely watched videos of the wheelchair’s projection interaction, immersing themselves in the scenarios while focusing on their feelings.
  • Questionnaire and interview: Participants completed the questionnaire and then engaged in a semi-structured interview to share their subjective views, allowing us to grasp the subtle differences in their attitudes and the underlying reasons.
For Experiment 2, general participants were briefed about the experiment’s purpose and research context. They watched the videos from a pedestrian’s perspective and were subsequently invited to fill out a questionnaire.

4. Results and Discussion

4.1. Questionnaires for Wheelchair Users

Figure 7 shows the SUS questionnaire results from 20 participants, with a mean score of 63. Most scores fall between 60 and 80, indicating general, though not ideal, acceptance of the proposed shared eHMI system. Notably, seven participants scored it below 60; their interview materials were then studied in depth. Item analysis found that aspects related to the complexity of the projection interaction, and to the consistency of the graphics and interactions with users’ habits and expectations, received slightly negative feedback.
The users’ attitude questionnaire results in Figure 8 reveal that most participants were positive about the smart projection wheelchair, showing openness to new technology. However, some had practicality concerns. Many users were worried about the reliability of entrusting their safety to the wheelchair and preferred to retain control, reflecting a need to build trust in fully autonomous driving technology. Despite some risk concerns, the majority were willing to try the smart wheelchair, believing it might outperform their manual operation skills with traditional electric wheelchairs.
The UEQ-S questionnaire results reveal that the pragmatic quality is 1.2, the hedonic quality is 1.7, and the overall score is 1.45. Users gave positive feedback on both qualities, but the lower pragmatic score indicates that while users experienced positive emotional responses during use, the functionality could be improved.

4.2. Questionnaires for Pedestrians

The pedestrian acceptance questionnaire results shown in Figure 9 indicate most participants were positive towards our wheelchair. Over 85% believed the projection interaction can help pedestrians avoid risks and react quickly, and most found it easy to interact with the wheelchair. Notably, only about 70% were willing to cross the road in front of the autonomous wheelchair, as evidenced by nearly 30% finding it difficult to cross the road in question 4 of Figure 9.
The UEQ-S questionnaire results for pedestrians reveal that the pragmatic quality is 1.37, the hedonic quality is 1.17, and the overall score is 1.26. The higher score in pragmatic quality compared to hedonic quality reflects pedestrians’ recognition of the shared eHMI’s practicality, especially in enhancing safety.

4.3. Discussion

4.3.1. Enhancing Understanding and Safety through Transparent Communication

The complexity of wheelchair scenarios presents significant challenges for autonomous driving. Emotional concerns about potential risks, such as the shadow of disability, past experiences of pedestrian collisions, and machine malfunctions, may decrease users’ sense of safety, trust, and acceptance of autonomous driving. Research indicates that promoting transparent communication can increase user trust, safety, and acceptance of autonomous driving systems [63].
The shared eHMI enhances system transparency by projecting the user interface, showing the anticipated path and interpreting the world, thereby informing users about the immediate driving statuses and intentions of the autonomous wheelchair. This approach allows users to observe, feel, and understand the behaviors of the autonomous mechanism more intuitively compared to relying on memory or mobile navigation.
Some participants reported that the user interface’s information exceeded expectations, enhancing the sense of intelligence and helping build trust. Specifically, the clear navigation path helped alleviate concerns about getting lost, real-time feedback on speed and destination proximity was very helpful, and obstacle avoidance and yielding behavior reduced the risks of falls and collisions with obstacles or pedestrians. Most participants appreciated its innovation, but the UEQ-S results showed a higher hedonic than pragmatic quality, likely due to the limited simulated scenarios and the absence of real-world testing.

4.3.2. Human–Machine Collaboration Reduces the Burden While Respecting Driving Preferences

In interviews, participants expressed concerns about the reliability of ADS technology applied to wheelchairs, while also hoping for machine assistance based on an understanding of the autonomous wheelchair’s intentions. We established a human–machine collaboration relationship through the HITL approach, not relying entirely on the autonomous system to complete tasks, but rather using the system to assist humans in monitoring and ensuring driving safety, capitalizing on human strengths in precise environmental perception, adaptive decision-making, and complex pattern recognition. We also found that participants of different ages responded differently to the driving experience: experienced young wheelchair participants preferred their own driving skills over the autonomous technology, while older participants leaned towards trusting the technology. The collaboration allows the autonomous system to undertake basic tasks, significantly reducing the operational burden on users, which is especially beneficial for vulnerable groups such as those with severe paralysis, limited hand mobility, or the elderly. Meanwhile, users can monitor the wheelchair’s statuses and intentions projected on the ground, adjusting based on system alerts or personal experience and preferences.
Human–machine collaboration not only enhances autonomy for users, but also meets their elevated driving needs and preferences. Users can perform other activities while driving, needing only to glance occasionally at ground projections. However, user needs and preferences vary. While older participants were concerned about the potential distraction of constant ground monitoring, younger users saw it as an opportunity for phone use. Most participants acknowledge the criticality of retaining control at any time, which enhances safety perception and reflects their desire for control [7]. Such flexibility allows users to adjust their driving according to personal state, environmental features, and social context. Additionally, diverse user preferences for interface features suggest the needs for an adaptive and intelligent system. According to the research [42], customizable interaction functions should meet individualized information needs, offering adaptive support based on the system context, learning from users’ driving habits, and providing a comfortable experience tailored to their psychological state and abilities, thus fulfilling user needs while respecting personal preferences.

4.3.3. Improving Wheelchair Navigation Clarity for Pedestrian Safety

Autonomous vehicle–pedestrian interactions often rely on eHMI, which studies show improves pedestrians’ safety perception and comfort. For instance, research indicates that pedestrian awareness of wheelchairs’ navigational intentions leads to smoother movements, contrasting with scenarios lacking communication [74]. Our Experiment 2 shows that pedestrian participants found the projection interaction effective in improving reaction times and reducing risk, and that it was easy to learn.
However, about 30% of participants reported anxiety when crossing roads as pedestrians, possibly attributable to the current design’s insufficient visual feedback for pedestrians. In manual phases, wheelchair users traditionally use vocal cues, eye contact, and gestures for clear multimodal communication. However, the turning arrow projected by the wheelchair while bypassing pedestrians, overlapping with the main navigation path on the same plane, may diminish the readability of the interface for pedestrians due to inadequate perceptual directness. This lack of clarity may lead to pedestrians overlooking the interaction, subsequently reducing the confidence of certain participants. In contrast, the wheelchair’s projected safety line offers clear and definite indications for pedestrians, enhancing the overall interaction quality. Refining these visual feedback mechanisms can further aid pedestrians in interpreting wheelchair behaviors, increasing interaction safety and efficiency.

4.3.4. The Novelty and Adaptability of Projection Interface Information

On-ground projection interaction, as an implementation of the proposed shared eHMI, employs a kind of augmented reality for immersive navigation. It enriches wheelchair–human interaction design by blending virtual navigation elements with real-time scenarios, ensuring practical visibility. In the user interface design, we balanced cognitive load against interface complexity, tailoring the display of information for user–system and system–pedestrian interactions to avoid information overload. Control over the display and hiding of interface components ensures information is provided when needed, eliminating unnecessary cues.
Participants’ feedback indicates that the continuously moving navigation path provides seamless guidance, and that the display of the global compass, speed, and destination proximity meets users’ needs for precise navigation. In terms of interface visual design, the color scheme of the interface was selected to enhance the visibility of projections, ensuring clarity across various ground colors. We observed that, despite the ground blur caused by the relative motion of the ground during wheelchair movement, the projected content maintained a degree of clarity even on variegated ground surfaces. This somewhat improves the ground adaptability of the projections.
Most participants found the projection interface design intuitive, innovative, practical, and engaging. However, a minority reported high information density and difficulty in understanding, leading to cumbersome and inefficient use. Beyond the visual feedback design, we believe the main reasons for these issues are the participants’ short adaptation time to the interface and varying receptiveness to new elements. For older wheelchair users, interface designs that are more age-appropriate need to be considered.

4.3.5. Concern and Potential Evaluation of the System

The results of the users’ attitude questionnaire reveal that most participants favored autonomous wheelchairs with projection interaction. However, Question 6 indicates that a subset of participants strongly perceived technological risks, leading to distrust and aversion, showing reservations about the interaction method and possibly avoiding the product. Data analysis indicated a correlation between lower SUS scores and negative attitudes towards the technology. Some participants’ unfamiliarity with autonomous driving technology, and others’ heightened concerns about risks whose resolution depends on further technological advancements, influenced their perception of insufficient reliability and safety in the interaction. Additionally, from our interview analysis, we also identified several internal concerns:
  • Visibility of projections and operating environment: Experienced wheelchair users often anticipate challenges new products might bring. Some participants noted insufficient visibility of projections in bright light conditions and expected improved performance at night or in indoor environments. They also expressed concerns about the effectiveness of projections on certain terrains and compatibility with existing accessibility infrastructure. Some believed that on-ground projections might be hard for passing vehicles to recognize, leading to navigation difficulties and unsafe situations due to blind spots.
  • Safety risks: In practical use, the safety of projections and potential distraction issues are of concern. The performance of wheelchairs in densely populated areas and their capability to handle challenging terrains, like steps or uneven surfaces, became crucial topics for discussion regarding safety and trust. Users prioritized safety and preferred to maintain control at any time. During autonomous driving, there is a worry that constantly looking down at projections or getting distracted by other tasks might lead to overlooking environmental risks.
  • Intrusiveness of projections and user image: Wheelchair users often view their wheelchair as an extension of their body and connected to their self-image. How they interact with the outside world affects their self-perception and how others perceive them, and it is vital to avoid drawing extra attention to the user’s disability [44,52,75]. We considered non-intrusive interaction methods, such as the timely appearance and disappearance of interface components and customizable projection switches, to minimize environmental space occupation while still attracting attention in emergencies. Some participants still reported that the projections attracted some attention, and that adding auditory cues might cause stress due to excessive focus on the user. In fact, the use of projections in autonomous mode is conspicuous compared to traditional interaction methods in manual mode, and the risk of pedestrians overlooking them and causing collisions must be addressed. Considering the addition of other modalities to build polite and friendly communication is important.
  • Potential applications: Some participants suggested innovative ideas, emphasizing broader applications of the system beyond its primary purpose. In vast environments such as parks or large shopping centers, the feature could serve not only wheelchair users but also the general public as a navigational tool: projected navigation paths on the ground clearly indicate directions, are easy to follow, and are particularly suitable for spacious, open spaces. Additionally, in hospitals or elderly care facilities, the potential of shared eHMI in autonomous wheelchairs is particularly evident. Medical practitioners could preset designated routes, allowing these wheelchairs to project their intentions during transit, helping pedestrians understand and cooperate to reduce collisions, and ultimately assisting patients or the elderly in navigating safely to specific locations within the facility.
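To illustrate the kind of non-intrusive behavior discussed above (timely appearance and disappearance of projected components plus a user-controlled projection switch), the following is a minimal Python sketch of such visibility logic. It is illustrative only: the class, method, and parameter names (ProjectionController, should_project, maneuver_pending, and so on) are hypothetical and are not taken from the system described in this paper.

```python
from enum import Enum, auto

class DriveMode(Enum):
    MANUAL = auto()
    AUTONOMOUS = auto()

class ProjectionController:
    """Hypothetical visibility controller for the on-ground projection."""

    def __init__(self, user_enabled: bool = True):
        # Customizable projection switch discussed above.
        self.user_enabled = user_enabled

    def should_project(self, mode: DriveMode, maneuver_pending: bool,
                       pedestrian_nearby: bool, emergency: bool) -> bool:
        if emergency:
            # Always attract attention in emergencies, even if switched off.
            return True
        if not self.user_enabled or mode is DriveMode.MANUAL:
            # Respect the user's switch; manual driving needs no intent projection.
            return False
        # In autonomous mode, appear only when the intent matters to someone nearby.
        return maneuver_pending or pedestrian_nearby

ctrl = ProjectionController()
# Autonomous mode with a pedestrian nearby -> projection shown.
print(ctrl.should_project(DriveMode.AUTONOMOUS, maneuver_pending=False,
                          pedestrian_nearby=True, emergency=False))  # True
# Cruising alone on an empty path -> projection hidden to reduce intrusiveness.
print(ctrl.should_project(DriveMode.AUTONOMOUS, maneuver_pending=False,
                          pedestrian_nearby=False, emergency=False))  # False
```

Keeping the show/hide decision in one place like this makes it easier to audit when the projection can appear and to expose the switch to the user.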

5. Conclusions and Future Works

5.1. Conclusions

In this study, we propose an on-ground projection-based shared external human–machine interface (eHMI) that enables an autonomous wheelchair to communicate its motion intentions to both the user and surrounding pedestrians. By combining innate human experience and intuition with machine intelligence, the proposed shared eHMI can enhance the smoothness and quality of autonomous wheelchair navigation. It alleviates the user's cognitive load through machine assistance while drawing on human perception for tailored route planning, achieving a more autonomous and personalized navigation experience. For pedestrians, understanding each other's behavioral intentions promotes safe interaction in a shared space.
Guided by this approach, we developed an autonomous wheelchair simulation platform that projects motion intentions onto the ground using an interface preferred by the user. The pilot study highlighted the appeal of on-ground projections, though concerns arose pertaining to visibility, safety, trust, and practicality.
Among the participants, wheelchair users reflected that the user interface proposed in this study enhanced their perception of the intelligence and trustworthiness of the autonomous wheelchair, as well as the overall user experience. Additionally, pedestrian participants indicated that the interface improved their safety experience when interacting with users of autonomous wheelchairs. The shared eHMI, utilizing augmented reality, improved navigation visibility and accuracy. However, wheelchair users expressed a desire for further practical enhancements, and pedestrian users saw room for improvement in visual feedback. Overall, despite concerns about the reliability of autonomous wheelchair technology among wheelchair users, the integration of system status and intent visualization through projection interaction led to a supportive attitude towards autonomous wheelchairs.

5.2. Limitations

Experimental Design:
  • Experimental method: The reliance on online video simulations deprived participants of a holistic experience. Even if the results are broadly congruent with those of an in-person study, participants missed the sensory experiences, such as touch and sound, that would accompany direct interaction with the autonomous driving and projection interface.
  • Brief interface adaptation time: Although information prioritization and simplification were considered in the design, the brief exposure to interface information could lead to a high cognitive burden for first-time users, especially elderly wheelchair users.
  • Incomplete scenario simulation: The experiment used an abstract, general road scenario with common obstacles and pedestrians but overlooked specific challenges such as steps, dense crowds, and diverse road users, possibly affecting user engagement and perceptions of practicality and safety. Additionally, the simulation platform shown in the videos operated slowly, which affected the impressions of users proficient in wheelchair use. Nevertheless, we recommend starting at a slower pace until users build confidence in the system.
Equipment Limitations:
Due to current projector brightness limitations, visual effectiveness may be compromised in brightly lit environments.

5.3. Future Works

As this was a pilot study, further work can build on the preliminary findings obtained.
Because of the limitations of this study, such as the video-based experiment and a participant group that did not include a substantial number of the primary target users, we plan to involve more elderly and upper-limb-impaired users in future research. These users have a stronger need for autonomous wheelchairs, and their inclusion will make the testing more comprehensive. Additionally, given the inherent limitations of projection visibility, we will expand the interaction channels and dimensions, for example by incorporating vehicle-body eHMI, display screens, and auditory feedback, to improve the expression of driving intentions and status. Subsequent research might explore more congenial and subtle communication systems for wheelchairs to minimize pedestrian disruption, emphasizing the interplay between the environment and wheelchair navigation and curtailing potential environmental disturbances, such as light pollution, while ensuring visibility. Moreover, in future field studies, we plan to include more complex driving scenarios to enhance the universal applicability of autonomous wheelchairs with integrated shared eHMI, and we will assess the effectiveness and comprehensibility of individual user interface components to address issues more specifically.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/app14010463/s1, Video S1: Shared eHMI: Bridging Human-Machine Understanding in Autonomous Wheelchair Navigation DEMO.

Author Contributions

Conceptualization, X.Z. and B.Z.; methodology, X.Z. and B.Z.; software, Z.S. and W.L.; validation, Z.S., Z.P. and W.L.; formal analysis, X.Z. and R.G.; investigation, X.Z. and Q.H.; resources, Z.S., Z.P. and Q.H.; data curation, Z.S., Z.P. and Q.H.; writing—original draft preparation, Z.S. and Q.H.; writing—review and editing, Z.S., X.Z. and B.Z.; visualization, Z.S., Q.H. and R.G.; supervision, B.Z.; project administration, X.Z. and B.Z.; funding acquisition, B.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by grants from the Humanity and Social Science Youth Foundation of the Ministry of Education of China, grant number 18YJCZH249, the National Natural Science Foundation of China, grant number 52008114, the Guangzhou Science and Technology Planning Project, grant number 201904010241, the Humanity Design and Engineering Research Team (263303306), and the Quality Engineering Project of Guangdong University of Technology (2022-59).

Institutional Review Board Statement

The study was approved by the Institutional Review Board of the Guangdong University of Technology.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy concerns for the participants involved in the study.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Table A1. User attitude questionnaire questions (five-point Likert scale: 1 = Strongly Disagree, 5 = Strongly Agree).
Q1. I think I would enjoy riding in this smart projection wheelchair.
Q2. I find the smart projection wheelchair to be more useful than any wheelchair I have used before.
Q3. I trust that the smart projection wheelchair can navigate autonomously without my assistance.
Q4. I would feel concerned about entrusting my safety to a smart wheelchair with projection interaction.
Q5. I would like to take over control from the smart projection wheelchair (in autonomous mode) at any time.
Q6. I would not use this smart projection wheelchair in the future because the technology poses potential risks.
Q7. I would prefer to trust the autonomous driving technology of the wheelchair over my own driving skills with an electric wheelchair.
Table A2. Pedestrian acceptance questionnaire questions (seven-point Likert scale: 1 = Strongly Disagree, 7 = Strongly Agree).
Q1. While crossing the road with autonomous wheelchairs in operation, I feel that having a projection interaction feature would enable me to react more quickly to unsafe walking conditions.
Q2. While crossing the road with autonomous wheelchairs in operation, I believe that a projection interaction feature would reduce my risk of being involved in an accident.
Q3. For me, learning to interact with a smart projection wheelchair while crossing the road is a straightforward task.
Q4. I find it difficult to cross the road when smart projection wheelchairs are operating on the road. (reverse score)
Q5. I intend to cross the road in front of smart projection wheelchairs.
Q6. I plan to cross the road in front of smart projection wheelchairs.

References

  1. Sivakanthan, S.; Candiotti, J.L.; Sundaram, S.A.; Duvall, J.A.; Sergeant, J.J.G.; Cooper, R.; Satpute, S.; Turner, R.L.; Cooper, R.A. Mini-Review: Robotic Wheelchair Taxonomy and Readiness. Neurosci. Lett. 2022, 772, 136482. [Google Scholar] [CrossRef] [PubMed]
  2. Ryu, H.-Y.; Kwon, J.-S.; Lim, J.-H.; Kim, A.-H.; Baek, S.-J.; Kim, J.-W. Development of an Autonomous Driving Smart Wheelchair for the Physically Weak. Appl. Sci. 2021, 12, 377. [Google Scholar] [CrossRef]
  3. Megalingam, R.K.; Rajendraprasad, A.; Raj, A.; Raghavan, D.; Teja, C.R.; Sreekanth, S.; Sankaran, R. Self-E: A Self-Driving Wheelchair for Elders and Physically Challenged. Int. J. Intell. Robot. Appl. 2021, 5, 477–493. [Google Scholar] [CrossRef]
  4. Grewal, H.; Matthews, A.; Tea, R.; George, K. LIDAR-Based Autonomous Wheelchair. In Proceedings of the 2017 IEEE Sensors Applications Symposium (SAS), Glassboro, NJ, USA, 13–15 March 2017; pp. 1–6. [Google Scholar]
  5. Alkhatib, R.; Swaidan, A.; Marzouk, J.; Sabbah, M.; Berjaoui, S.; Diab, M.O. Smart Autonomous Wheelchair. In Proceedings of the 2019 3rd International Conference on Bio-engineering for Smart Technologies (BioSMART), Paris, France, 24–26 April 2019; pp. 1–5. [Google Scholar]
  6. Lim, D.; Kim, B. UI Design of eHMI of Autonomous Vehicles. Int. J. Hum. Comput. Interact. 2022, 38, 1944–1961. [Google Scholar] [CrossRef]
  7. Jang, J.; Li, Y.; Carrington, P. “I Should Feel Like I’m In Control”: Understanding Expectations, Concerns, and Motivations for the Use of Autonomous Navigation on Wheelchairs. In Proceedings of the 24th International ACM SIGACCESS Conference on Computers and Accessibility, Athens, Greece, 23 October 2022; pp. 1–5. [Google Scholar]
  8. Li, J.; He, Y.; Yin, S.; Liu, L. Effects of Automation Transparency on Trust: Evaluating HMI in the Context of Fully Autonomous Driving. In Proceedings of the 15th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Ingolstadt, Germany, 18 September 2023; pp. 311–321. [Google Scholar]
  9. Lyons, J.B.; Havig, P.R. Transparency in a Human-Machine Context: Approaches for Fostering Shared Awareness/Intent. In Virtual, Augmented and Mixed Reality: Designing and Developing Virtual and Augmented Environments—Proceedings of the 6th International Conference, VAMR 2014, Heraklion, Greece, 22–27 June 2014; Shumaker, R., Lackey, S., Eds.; Springer: Cham, Switzerland, 2014; pp. 181–190. [Google Scholar]
  10. Utaminingrum, F.; Mayena, S.; Karim, C.; Wahyudi, S.; Huda, F.A.; Lin, C.-Y.; Shih, T.K.; Thaipisutikul, T. Road Surface Detection for Autonomous Smart Wheelchair. In Proceedings of the 2022 5th World Symposium on Communication Engineering (WSCE), Nagoya, Japan, 16 September 2022; pp. 69–73. [Google Scholar]
  11. Zang, G.; Azouigui, S.; Saudrais, S.; Hebert, M.; Goncalves, W. Evaluating the Understandability of Light Patterns and Pictograms for Autonomous Vehicle-to-Pedestrian Communication Functions. IEEE Trans. Intell. Transport. Syst. 2022, 23, 18668–18680. [Google Scholar] [CrossRef]
  12. Wu, C.F.; Xu, D.D.; Lu, S.H.; Chen, W.C. Effect of Signal Design of Autonomous Vehicle Intention Presentation on Pedestrians’ Cognition. Behav. Sci. 2022, 12, 502. [Google Scholar] [CrossRef] [PubMed]
  13. Carmona, J.; Guindel, C.; Garcia, F.; de la Escalera, A. eHMI: Review and Guidelines for Deployment on Autonomous Vehicles. Sensors 2021, 21, 2912. [Google Scholar] [CrossRef]
  14. Papakostopoulos, V.; Nathanael, D.; Portouli, E.; Amditis, A. Effect of External HMI for Automated Vehicles (AVs) on Drivers’ Ability to Infer the AV Motion Intention: A Field Experiment. Transp. Res. Part F Traffic Psychol. Behav. 2021, 82, 32–42. [Google Scholar] [CrossRef]
  15. Rettenmaier, M.; Albers, D.; Bengler, K. After You?!—Use of External Human-Machine Interfaces in Road Bottleneck Scenarios. Transp. Res. Part F Traffic Psychol. Behav. 2020, 70, 175–190. [Google Scholar] [CrossRef]
  16. Faas, S.M.; Mathis, L.-A.; Baumann, M. External HMI for Self-Driving Vehicles: Which Information Shall Be Displayed? Transp. Res. Part F Traffic Psychol. Behav. 2020, 68, 171–186. [Google Scholar] [CrossRef]
  17. Jiang, Q.; Zhuang, X.; Ma, G. Evaluation of external HMI in autonomous vehicles based on pedestrian road crossing decision-making model. Adv. Psychol. Sci. 2021, 29, 1979–1992. [Google Scholar] [CrossRef]
  18. Eisma, Y.B.; Van Bergen, S.; Ter Brake, S.M.; Hensen, M.T.T.; Tempelaar, W.J.; De Winter, J.C.F. External Human–Machine Interfaces: The Effect of Display Location on Crossing Intentions and Eye Movements. Information 2019, 11, 13. [Google Scholar] [CrossRef]
  19. Othersen, I.; Conti-Kufner, A.S.; Dietrich, A.; Maruhn, P.; Bengler, K. Designing for Automated Vehicle and Pedestrian Communication: Perspectives on eHMIs from Older and Younger Persons. In Proceedings of the HFES Europe Annual Meeting, Berlin, Germany, 3–8 October 2018. [Google Scholar]
  20. Bazilinskyy, P.; Dodou, D.; De Winter, J. Survey on eHMI Concepts: The Effect of Text, Color, and Perspective. Transp. Res. Part F Traffic Psychol. Behav. 2019, 67, 175–194. [Google Scholar] [CrossRef]
  21. Löcken, A.; Golling, C.; Riener, A. How Should Automated Vehicles Interact with Pedestrians? A Comparative Analysis of Interaction Concepts in Virtual Reality. In Proceedings of the 11th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Utrecht, The Netherlands, 21 September 2019; pp. 262–274. [Google Scholar]
  22. Dou, J.; Chen, S.; Tang, Z.; Xu, C.; Xue, C. Evaluation of Multimodal External Human–Machine Interface for Driverless Vehicles in Virtual Reality. Symmetry 2021, 13, 687. [Google Scholar] [CrossRef]
  23. Dietrich, A.; Willrodt, J.-H.; Wagner, K.; Bengler, K. Projection-Based External Human Machine Interfaces—Enabling Interaction between Automated Vehicles and Pedestrians. In Proceedings of the Driving Simulation Conference 2018 Europe VR, Driving Simulation Association, Antibes, France, 5 September 2018; pp. 43–50. [Google Scholar]
  24. Tabone, W.; Lee, Y.M.; Merat, N.; Happee, R.; de Winter, J. Towards Future Pedestrian-Vehicle Interactions: Introducing Theoretically-Supported AR Prototypes. In Proceedings of the 13th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Association for Computing Machinery, New York, NY, USA, 20 September 2021; pp. 209–218. [Google Scholar]
  25. Zolotas, M.; Demiris, Y. Towards Explainable Shared Control Using Augmented Reality. In Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 4–8 November 2019; pp. 3020–3026. [Google Scholar]
  26. Zolotas, M.; Elsdon, J.; Demiris, Y. Head-Mounted Augmented Reality for Explainable Robotic Wheelchair Assistance. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 1823–1829. [Google Scholar]
  27. De Clercq, K.; Dietrich, A.; Núñez Velasco, J.P.; De Winter, J.; Happee, R. External Human-Machine Interfaces on Automated Vehicles: Effects on Pedestrian Crossing Decisions. Hum. Factors 2019, 61, 1353–1370. [Google Scholar] [CrossRef] [PubMed]
  28. Colley, A.; Häkkilä, J.; Forsman, M.-T.; Pfleging, B.; Alt, F. Car Exterior Surface Displays: Exploration in a Real-World Context. In Proceedings of the 7th ACM International Symposium on Pervasive Displays, Munich, Germany, 6 June 2018; pp. 1–8. [Google Scholar]
  29. Wang, B.J.; Yang, C.H.; Gu, Z.Y. Smart Flashlight: Navigation Support for Cyclists. In Design, User Experience, and Usability: Users, Contexts and Case Studies—Proceedings of the 7th International Conference, DUXU 2018, Las Vegas, NV, USA, 15–20 July 2018; Marcus, A., Wang, W., Eds.; Springer: Cham, Switzerland, 2018; pp. 406–414. [Google Scholar]
  30. Kim, B.; Pineau, J. Socially Adaptive Path Planning in Human Environments Using Inverse Reinforcement Learning. Int. J. Soc. Robot. 2016, 8, 51–66. [Google Scholar] [CrossRef]
  31. Ezeh, C.; Trautman, P.; Devigne, L.; Bureau, V.; Babel, M.; Carlson, T. Probabilistic vs Linear Blending Approaches to Shared Control for Wheelchair Driving. In Proceedings of the 2017 International Conference on Rehabilitation Robotics (ICORR), London, UK, 17–20 July 2017; pp. 835–840. [Google Scholar]
  32. Simpson, R.; LoPresti, E.; Hayashi, S.; Nourbakhsh, I.; Miller, D. The Smart Wheelchair Component System. J. Rehabil. Res. Dev. 2004, 41, 429–442. [Google Scholar] [CrossRef]
  33. Baltazar, A.R.; Petry, M.R.; Silva, M.F.; Moreira, A.P. Autonomous Wheelchair for Patient’s Transportation on Healthcare Institutions. SN Appl. Sci. 2021, 3, 354. [Google Scholar] [CrossRef]
  34. Thuan Nguyen, V.; Sentouh, C.; Pudlo, P.; Popieul, J.-C. Joystick Haptic Force Feedback for Powered Wheelchair—A Model-Based Shared Control Approach. In Proceedings of the 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Toronto, ON, Canada, 11 October 2020; pp. 4453–4459. [Google Scholar]
  35. Viswanathan, P.; Zambalde, E.P.; Foley, G.; Graham, J.L.; Wang, R.H.; Adhikari, B.; Mackworth, A.K.; Mihailidis, A.; Miller, W.C.; Mitchell, I.M. Intelligent Wheelchair Control Strategies for Older Adults with Cognitive Impairment: User Attitudes, Needs, and Preferences. Auton. Robot. 2017, 41, 539–554. [Google Scholar] [CrossRef]
  36. Wang, Y.; Hespanhol, L.; Tomitsch, M. How Can Autonomous Vehicles Convey Emotions to Pedestrians? A Review of Emotionally Expressive Non-Humanoid Robots. Multimodal. Technol. Interact. 2021, 5, 84. [Google Scholar] [CrossRef]
  37. Carlson, T.; Demiris, Y. Human-Wheelchair Collaboration through Prediction of Intention and Adaptive Assistance. In Proceedings of the 2008 IEEE International Conference on Robotics and Automation, Pasadena, CA, USA, 19–23 May 2008; pp. 3926–3931. [Google Scholar]
  38. Wang, W.; Na, X.; Cao, D.; Gong, J.; Xi, J.; Xing, Y.; Wang, F.-Y. Decision-Making in Driver-Automation Shared Control: A Review and Perspectives. IEEE/CAA J. Autom. Sin. 2020, 7, 1289–1307. [Google Scholar] [CrossRef]
  39. Xi, L.; Shino, M. Shared Control of an Electric Wheelchair Considering Physical Functions and Driving Motivation. Int. J. Environ. Res. Public Health 2020, 17, 5502. [Google Scholar] [CrossRef] [PubMed]
  40. Escobedo, A.; Spalanzani, A.; Laugier, C. Multimodal Control of a Robotic Wheelchair: Using Contextual Information for Usability Improvement. In Proceedings of the 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, Tokyo, Japan, 3–7 November 2013; pp. 4262–4267. [Google Scholar]
  41. Li, Q.; Chen, W.; Wang, J. Dynamic Shared Control for Human-Wheelchair Cooperation. In Proceedings of the 2011 IEEE International Conference on Robotics and Automation, Shanghai, China, 9–13 May 2011; pp. 4278–4283. [Google Scholar]
  42. Zhang, B.; Barbareschi, G.; Ramirez Herrera, R.; Carlson, T.; Holloway, C. Understanding Interactions for Smart Wheelchair Navigation in Crowds. In Proceedings of the CHI Conference on Human Factors in Computing Systems, New Orleans, LA, USA, 29 April 2022; pp. 1–16. [Google Scholar]
  43. Anwer, S.; Waris, A.; Sultan, H.; Butt, S.I.; Zafar, M.H.; Sarwar, M.; Niazi, I.K.; Shafique, M.; Pujari, A.N. Eye and Voice-Controlled Human Machine Interface System for Wheelchairs Using Image Gradient Approach. Sensors 2020, 20, 5510. [Google Scholar] [CrossRef] [PubMed]
  44. Barbareschi, G.; Daymond, S.; Honeywill, J.; Singh, A.; Noble, D.N.; Mbugua, N.; Harris, I.; Austin, V.; Holloway, C. Value beyond Function: Analyzing the Perception of Wheelchair Innovations in Kenya. In Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility, Virtual Event, 26 October 2020; pp. 1–14. [Google Scholar]
  45. Kaye, S.-A.; Li, X.; Oviedo-Trespalacios, O.; Pooyan Afghari, A. Getting in the Path of the Robot: Pedestrians Acceptance of Crossing Roads near Fully Automated Vehicles. Travel Behav. Soc. 2022, 26, 1–8. [Google Scholar] [CrossRef]
  46. Liu, H.; Hirayama, T.; Morales Saiki, L.Y.; Murase, H. Implicit Interaction with an Autonomous Personal Mobility Vehicle: Relations of Pedestrians’ Gaze Behavior with Situation Awareness and Perceived Risks. Int. J. Hum. Comput. Interact. 2023, 39, 2016–2032. [Google Scholar] [CrossRef]
  47. Kong, J.; Li, P. Path Planning of a Multifunctional Elderly Intelligent Wheelchair Based on the Sensor and Fuzzy Bayesian Network Algorithm. J. Sens. 2022, 2022, e8485644. [Google Scholar] [CrossRef]
  48. Epke, M.R.; Kooijman, L.; de Winter, J.C.F. I See Your Gesture: A VR-Based Study of Bidirectional Communication between Pedestrians and Automated Vehicles. J. Adv. Transp. 2021, 2021, e5573560. [Google Scholar] [CrossRef]
  49. Watanabe, A.; Ikeda, T.; Morales, Y.; Shinozawa, K.; Miyashita, T.; Hagita, N. Communicating Robotic Navigational Intentions. In Proceedings of the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 31 October 2015. [Google Scholar]
  50. Shrestha, M.C.; Onishi, T.; Kobayashi, A.; Kamezaki, M.; Sugano, S. Communicating Directional Intent in Robot Navigation Using Projection Indicators. In Proceedings of the 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Nanjing, China, 27–31 August 2018; pp. 746–751. [Google Scholar]
  51. Othman, K. Public Acceptance and Perception of Autonomous Vehicles: A Comprehensive Review. AI Ethics 2021, 1, 355–387. [Google Scholar] [CrossRef]
  52. Barbareschi, G.; Carew, M.T.; Johnson, E.A.; Kopi, N.; Holloway, C. “When They See a Wheelchair, They’ve Not Even Seen Me”—Factors Shaping the Experience of Disability Stigma and Discrimination in Kenya. Int. J. Environ. Res. Public Health 2021, 18, 4272. [Google Scholar] [CrossRef]
  53. Asha, A.Z.; Smith, C.; Freeman, G.; Crump, S.; Somanath, S.; Oehlberg, L.; Sharlin, E. Co-Designing Interactions between Pedestrians in Wheelchairs and Autonomous Vehicles. In Proceedings of the 2021 ACM Designing Interactive Systems Conference, Virtual Event, 28 June–2 July 2021; pp. 339–351. [Google Scholar]
  54. Rateke, T.; von Wangenheim, A. Road Surface Detection and Differentiation Considering Surface Damages. Auton. Robot. 2021, 45, 299–312. [Google Scholar] [CrossRef]
  55. Salvini, P.; Reinmund, T.; Hardin, B.; Grieman, K.; Ten Holter, C.; Johnson, A.; Kunze, L.; Winfield, A.; Jirotka, M. Human Involvement in Autonomous Decision-Making Systems. Lessons Learned from Three Case Studies in Aviation, Social Care and Road Vehicles. Front. Political Sci. 2023, 5, 1238461. [Google Scholar] [CrossRef]
  56. Abraham, S.; Carmichael, Z.; Banerjee, S.; VidalMata, R.; Agrawal, A.; Al Islam, M.N.; Scheirer, W.; Cleland-Huang, J. Adaptive Autonomy in Human-on-the-Loop Vision-Based Robotics Systems. In Proceedings of the 2021 IEEE/ACM 1st Workshop on AI Engineering—Software Engineering for AI (WAIN), Madrid, Spain, 30–31 May 2021; pp. 113–120. [Google Scholar]
  57. Wu, J.; Huang, Z.; Hu, Z.; Lv, C. Toward Human-in-the-Loop AI: Enhancing Deep Reinforcement Learning via Real-Time Human Guidance for Autonomous Driving. Engineering 2023, 21, 75–91. [Google Scholar] [CrossRef]
  58. Summerfield, C.; Egner, T. Attention and Decision-Making. In The Oxford Handbook of Attention; Nobre, A.C., Kastner, S., Eds.; Oxford University Press: Oxford, UK, 2014; ISBN 978-0-19-967511-1. [Google Scholar]
  59. Methnani, L.; Aler Tubella, A.; Dignum, V.; Theodorou, A. Let Me Take Over: Variable Autonomy for Meaningful Human Control. Front. Artif. Intell. 2021, 4, 737072. [Google Scholar] [CrossRef]
  60. Wu, X.; Xiao, L.; Sun, Y.; Zhang, J.; Ma, T.; He, L. A Survey of Human-in-the-Loop for Machine Learning. Future Gener. Comput. Syst. 2022, 135, 364–381. [Google Scholar] [CrossRef]
  61. Nielsen, J. Usability Inspection Methods. In Proceedings of the Conference Companion on Human Factors in Computing Systems, Boston, MA, USA, 24–28 April 1994; pp. 413–414. [Google Scholar]
  62. NN/g Nielsen Norman Group. Visibility of System Status. Available online: https://www.nngroup.com/articles/visibility-system-status/ (accessed on 27 December 2023).
  63. Detjen, H.; Salini, M.; Kronenberger, J.; Geisler, S.; Schneegass, S. Towards Transparent Behavior of Automated Vehicles: Design and Evaluation of HUD Concepts to Support System Predictability through Motion Intent Communication. In Proceedings of the 23rd International Conference on Mobile Human-Computer Interaction, Toulouse, France, 27 September 2021; pp. 1–12. [Google Scholar]
  64. Dancu, A.; Vechev, V.; Ünlüer, A.A.; Nilson, S.; Nygren, O.; Eliasson, S.; Barjonet, J.-E.; Marshall, J.; Fjeld, M. Gesture Bike: Examining Projection Surfaces and Turn Signal Systems for Urban Cycling. In Proceedings of the 2015 International Conference on Interactive Tabletops & Surfaces—ITS ’15, Madeira, Portugal, 15–18 November 2015; pp. 151–159. [Google Scholar]
  65. Nguyen, T.T.; Holländer, K.; Hoggenmueller, M.; Parker, C.; Tomitsch, M. Designing for Projection-Based Communication between Autonomous Vehicles and Pedestrians. In Proceedings of the 11th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Utrecht, The Netherlands, 21 September 2019; pp. 284–294. [Google Scholar]
  66. Dey, D.; Habibovic, A.; Löcken, A.; Wintersberger, P.; Pfleging, B.; Riener, A.; Martens, M.; Terken, J. Taming the eHMI Jungle: A Classification Taxonomy to Guide, Compare, and Assess the Design Principles of Automated Vehicles’ External Human-Machine Interfaces. Transp. Res. Interdiscip. Perspect. 2020, 7, 100174. [Google Scholar] [CrossRef]
  67. Plikynas, D.; Žvironas, A.; Budrionis, A.; Gudauskis, M. Indoor Navigation Systems for Visually Impaired Persons: Mapping the Features of Existing Technologies to User Needs. Sensors 2020, 20, 636. [Google Scholar] [CrossRef]
  68. Zhao, X.; Li, X.; Rakotonirainy, A.; Bourgeois-Bougrine, S.; Gruyer, D.; Delhomme, P. The ‘Invisible Gorilla’ during Pedestrian-AV Interaction: Effects of Secondary Tasks on Pedestrians’ Reaction to eHMIs. Accid. Anal. Prev. 2023, 192, 107246. [Google Scholar] [CrossRef]
  69. Dey, D.; Martens, M.; Eggen, B.; Terken, J. Pedestrian Road-Crossing Willingness as a Function of Vehicle Automation, External Appearance, and Driving Behaviour. Transp. Res. Part F Traffic Psychol. Behav. 2019, 65, 191–205. [Google Scholar] [CrossRef]
  70. Brooke, J. SUS—A Quick and Dirty Usability Scale. In Usability Evaluation in Industry; Taylor & Francis: Abingdon, UK, 1996; pp. 189–194. [Google Scholar]
  71. Schrepp, M.; Hinderks, A.; Thomaschewski, J. Design and Evaluation of a Short Version of the User Experience Questionnaire (UEQ-S). Int. J. Interact. Multimed. Artif. Intell. 2017, 4, 103. [Google Scholar] [CrossRef]
  72. Rahman, M.M.; Lesch, M.F.; Horrey, W.J.; Strawderman, L. Assessing the Utility of TAM, TPB, and UTAUT for Advanced Driver Assistance Systems. Accid. Anal. Prev. 2017, 108, 361–373. [Google Scholar] [CrossRef]
  73. Buckley, L.; Kaye, S.-A.; Pradhan, A.K. Psychosocial Factors Associated with Intended Use of Automated Vehicles: A Simulated Driving Study. Accid. Anal. Prev. 2018, 115, 202–208. [Google Scholar] [CrossRef] [PubMed]
  74. Morales, Y.; Watanabe, A.; Ferreri, F.; Even, J.; Ikeda, T.; Shinozawa, K.; Miyashita, T.; Hagita, N. Including Human Factors for Planning Comfortable Paths. In Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 26–30 May 2015; pp. 6153–6159. [Google Scholar]
  75. Faucett, H.A.; Ringland, K.E.; Cullen, A.L.L.; Hayes, G.R. (In)Visibility in Disability and Assistive Technology. ACM Trans. Access. Comput. 2017, 10, 1–17. [Google Scholar] [CrossRef]
Figure 1. The limitations of autonomous driving sensors in discerning terrain nuances and social scenarios require user intervention for accurate distinction.
Figure 2. The design scenario of an on-ground projection interaction for autonomous wheelchairs is used for visualizing imminent paths, yielding to pedestrians, and social interaction, and users can resume control of the wheelchairs at any time.
Figure 3. User interface and interaction design for wheelchair projections. (a) The main interface; (b) the current state when heading away from the navigation direction; (c) the turning state; and (d) the state of yield to pedestrians.
Figure 4. Schematic illustration of the practical working scenarios of the shared eHMI, including the projection interaction effects presented from both a scene perspective and a first-person viewpoint. Additionally, the design of the user control panel is showcased in the figure. This panel is equipped with functional buttons for controlling the behavior of the autonomous wheelchair and adjusting projection interaction information. In the experiment, the user–system interaction method is demonstrated in the form of a virtual control panel.
Figure 5. Construction of the autonomous wheelchair simulation platform, illustrating the hardware components utilized.
Figure 6. Two experimental routes. Route 1 (A) includes basic wheelchair behaviors and obstacle avoidance behaviors. Route 2 (B) involves interaction with pedestrians.
Figure 7. SUS scores for each participant.
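Figure 7 reports a single SUS score per participant. For readers unfamiliar with the scale, the following is a minimal sketch of the standard SUS scoring rule described by Brooke [70]; it assumes the usual ten-item, five-point questionnaire and is illustrative only, not the analysis script used in this study.

```python
def sus_score(responses: list[int]) -> float:
    """Compute a System Usability Scale score from ten 1-5 Likert responses.

    Odd-numbered items are positively worded (contribution = response - 1),
    even-numbered items are negatively worded (contribution = 5 - response);
    the summed contributions are scaled to a 0-100 range.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS expects ten responses on a 1-5 scale")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)   # 0-based index: even index = odd item
                for i, r in enumerate(responses))
    return total * 2.5

# Example with hypothetical responses:
print(sus_score([4, 2, 4, 2, 5, 1, 4, 2, 4, 2]))  # 80.0
```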
Figure 8. Response to the questionnaire about users' attitude towards our wheelchair. The bar heights indicate the number of answers per category. Higher scores indicate greater acceptance of the wheelchair, with the exception of Q4. Detailed questions of the questionnaire are shown in Table A1.
Figure 9. Response to the questionnaire about pedestrian acceptance. The bar heights indicate the number of answers per category. Detailed questions of the questionnaire are shown in Table A2.
Table 1. Design of 17 scenarios for autonomous wheelchairs.
Use Cases | Scenarios | UI Elements | UI Designs
Wheelchair's status for user | The wheelchair is in autonomous mode. | Mode status bar | Auto mode activation
Wheelchair's status for user | The wheelchair is in manual control mode. | Mode status bar | Manual mode activation
Wheelchair's status for user | The wheelchair is functioning properly. | Dynamic breathing circle | Breathing circle animation
Wheelchair's status for user | The wheelchair is malfunctioning. | Dynamic prompt | Fault alert
Wheelchair's status for user | The wheelchair is charging. | Battery display | Battery percentage and visualization
Wheelchair's status for user | The wheelchair is on standby. | Dynamic breathing circle | Display breathing circle independently
Wheelchair's status for user | Current navigation process of the wheelchair | Information component | Date, time, destination proximity
Wheelchair's intent | The wheelchair is going to start moving. | Navigation path display | Navigation path fade-in animation
Wheelchair's intent | The wheelchair is driving. | Speed dial | Driving speed, speed gear, compass
Wheelchair's intent | The wheelchair is driving. | Navigation path display | Navigation path moving animation
Wheelchair's intent | The wheelchair is slowing down or stopping. | Navigation path display | Navigation path slow or stop animation
Wheelchair's intent | The wheelchair is adjusting yaw. | Yaw arrow | Yaw arrow rotating around breathing circle towards target direction
Wheelchair's intent | The wheelchair is turning. | Navigation path display | Navigation path bending towards target direction
Wheelchair's intent | The wheelchair is turning. | Dynamic prompt | Turning direction
Wheelchair's intent | The wheelchair is avoiding obstacles. | Dynamic prompt | Turning direction, obstacle detection
Wheelchair's intent | Attention! Poor road conditions ahead (real-time situation). | Dynamic prompt | Traffic condition alert
Wheelchair–pedestrian interaction | Whether the wheelchair has detected the person. | Dynamic prompt | Turning direction, pedestrian detection
Wheelchair–pedestrian interaction | The wheelchair will bypass the pedestrian. | Turning arrow | Text and arrow blinking
Wheelchair–pedestrian interaction | The wheelchair will yield to the pedestrian. | Safety line | Pop-up safety distance line to yield to pedestrians
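Read as a specification, Table 1 maps each driving scenario to the interface elements and designs that the projection should display. Below is a minimal sketch of how such a mapping could be encoded as a simple lookup table; the names (UiRule, UI_RULES, elements_for) are hypothetical illustrations and are not taken from the authors' implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class UiRule:
    use_case: str    # e.g., "Wheelchair's status for user"
    scenario: str    # e.g., "The wheelchair is turning."
    ui_element: str  # e.g., "Navigation path display"
    ui_design: str   # e.g., "Navigation path bending towards target direction"

# A few of the 17 scenario-to-UI mappings from Table 1, encoded as data.
UI_RULES = [
    UiRule("Wheelchair's status for user", "The wheelchair is in autonomous mode.",
           "Mode status bar", "Auto mode activation"),
    UiRule("Wheelchair's intent", "The wheelchair is turning.",
           "Navigation path display", "Navigation path bending towards target direction"),
    UiRule("Wheelchair's intent", "The wheelchair is turning.",
           "Dynamic prompt", "Turning direction"),
    UiRule("Wheelchair-pedestrian interaction", "The wheelchair will yield to the pedestrian.",
           "Safety line", "Pop-up safety distance line to yield to pedestrians"),
]

def elements_for(scenario: str) -> list[str]:
    """Return every UI element that the table associates with a given scenario."""
    return [rule.ui_element for rule in UI_RULES if rule.scenario == scenario]

# Example: a turning maneuver activates both the bending path and a turning prompt.
print(elements_for("The wheelchair is turning."))
# ['Navigation path display', 'Dynamic prompt']
```

Keeping the mapping as data rather than scattering it through rendering code would make it straightforward to add or adjust scenarios as the taxonomy in Table 1 evolves.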
Table 2. Participants' demographics for the study.
Participant ID | Age | Gender | Wheelchair Usage Duration | Education
P1 | 27 | Male | Over 1 year | Junior college
P2 | 30 | Female | Within 1 year | Bachelor's degree
P3 | 61 | Female | Over 1 year | Junior college
P4 | 28 | Female | Over 1 year | Bachelor's degree
P5 | 35 | Male | Over 1 year | Bachelor's degree
P6 | 33 | Female | Over 1 year | Bachelor's degree
P7 | 25 | Male | Over 1 year | Bachelor's degree
P8 | 82 | Male | Over 1 year | Middle school
P9 | 52 | Male | 1 month | Junior college
P10 | 29 | Male | Over 1 year | Junior college
P11 | 39 | Female | Over 1 year | Bachelor's degree
P12 | 23 | Female | Over 1 year | Postgraduate
P13 | 33 | Male | Over 1 year | Junior college
P14 | 28 | Male | Over 1 year | Middle school
P15 | 35 | Male | Over 1 year | Bachelor's degree
P16 | 30 | Male | Over 1 year | Middle school
P17 | 35 | Female | 1 month | Postgraduate
P18 | 38 | Female | Over 1 year | Middle school
P19 | 30 | Female | Over 1 year | Bachelor's degree
P20 | 28 | Female | Over 1 year | Middle school
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.


