Article

Drivers’ Age and Automated Vehicle Explanations

by Qiaoning Zhang 1,*, Xi Jessie Yang 2 and Lionel P. Robert, Jr. 1
1 School of Information, University of Michigan, Ann Arbor, MI 48109, USA
2 Department of Industrial and Operations Engineering, University of Michigan, Ann Arbor, MI 48109, USA
* Author to whom correspondence should be addressed.
Sustainability 2021, 13(4), 1948; https://doi.org/10.3390/su13041948
Submission received: 31 December 2020 / Revised: 3 February 2021 / Accepted: 5 February 2021 / Published: 11 February 2021

Abstract: Automated vehicles (AV) have the potential to benefit our society. Providing explanations is one approach to facilitating AV trust by decreasing uncertainty about automated decision-making. However, it is not clear whether explanations are equally beneficial for drivers across age groups in terms of trust and anxiety. To examine this, we conducted a mixed-design experiment with 40 participants divided into three age groups (i.e., younger, middle-age, and older). Participants were presented with: (1) no explanation, or (2) explanation given before or (3) after the AV took action, or (4) explanation along with a request for permission to take action. Results highlight both commonalities and differences between age groups. These results have important implications in designing AV explanations and promoting trust.

1. Introduction

Automated vehicles (AVs) have the potential to benefit our society, in part because Americans outlive their ability to drive safely by an average of 7–10 years [1,2,3]. For many, aging can correspond with greater difficulty in driving, and according to the US Census Bureau, approximately 70 million individuals in the United States will be over the age of 64 by 2030 (Figure 1) [4]. This is why AVs have been suggested as one potential solution, but a lack of trust hinders their adoption across all age groups [5,6]. The Society of Automotive Engineers (SAE) classifies driving automation into six levels ranging from 0 to 5, as shown in Table 1. As the level increases from 0 to 5, the need for driver involvement decreases. At SAE level 3, the human driver still has to intervene when asked to do so by the automated driving system, whereas at SAE levels 4 and 5, the automated driving system takes full responsibility for all of the driving tasks in some and all circumstances, respectively [7]. In this study, the term AVs refers to vehicles at SAE level 4 and higher.
AV explanations are one approach to promoting trust in AVs, but driver age might undermine their impact. Explanations—reasoning or logic behind actions—have been shown to facilitate trust in automation [8]. Explanations reduce anxiety about the actions taken by automation [9,10,11]. Although it has received little attention, age is likely to be important in determining the effectiveness of AV explanations. Aging often corresponds with greater difficulty in driving. Older drivers (55 and older) have been shown to be slower to respond at signal lights, to have more difficulty in judging visuospatial relations, and to be more prone to accidents at moderate to high speeds compared to those in younger (18–24 years) and middle-aged (25–54 years) groups [12,13,14]. This has been attributed, at least in part, to decrements in their cognitive (e.g., cognitive processing speed, sustained attention), psychomotor (e.g., manual dexterity), and perceptual abilities [15]. However, little work has been done to understand the relationship between age and AV explanations.
To address this issue, we sought to understand the influence of the driver’s age on the impacts of AV explanations on the driver’s trust, effort, and anxiety. We conducted a mixed-design experiment with 40 adults in three age groups (i.e., younger, middle-aged, and older). Participants were presented with an AV that (1) gave no explanation, (2) gave an explanation before or (3) after the AV took action, or (4) gave an explanation along with a request for permission to take action. The results reveal that the driver’s age is indeed vital to understanding when AV explanations promote trust and reduce effort and anxiety.
This study provides several contributions to the literature. First, we demonstrated the importance of the driver’s age on the ability of AV explanations to promote trust and reduce effort and anxiety. Second, in doing so, we answered numerous calls for the development of more inclusive artificial intelligence (AI) systems [16,17]. These calls highlighted the problems of AI bias. In this paper we define AI bias as the underlying assumption that an AI system built for one subgroup is good for all groups. Finally, we provide design recommendations that are likely to help reduce age-based biases in AVs.
The rest of this paper is organized as follows. Section 2 presents the background for the work, and Section 3 illustrates the present study and hypothesis development. Section 4 describes the method, and the results are presented in Section 5 and discussed in Section 6. The conclusion of this paper is presented in Section 7.

2. Background

Driving automation includes Advanced Driver Assistance Systems (ADAS) and Automated Driving Systems (ADS) [18]. In this paper, the distinction between ADAS and ADS is based on the SAE’s taxonomy. According to the SAE, ADAS are represented by levels 1 and 2. ADAS are systems that assist humans with driving by performing some aspect of the driving (i.e., steering or braking/accelerating). ADS are represented by levels 3, 4, and 5. ADS are capable of driving with various degrees of human supervisor/intervention [7].
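The ADAS/ADS split by SAE level described above can be encoded as a simple lookup. This is an illustrative sketch, not the SAE's own wording: the level names are paraphrased from the J3016 taxonomy and the `category` grouping follows the paper's usage.

```python
# Illustrative sketch of the SAE level taxonomy and the ADAS/ADS grouping
# used in this paper. Level names are paraphrased from SAE J3016.
SAE_LEVELS = {
    0: "No Driving Automation",
    1: "Driver Assistance",
    2: "Partial Driving Automation",
    3: "Conditional Driving Automation",
    4: "High Driving Automation",
    5: "Full Driving Automation",
}

def category(level):
    """Return the paper's grouping for a given SAE level."""
    if level in (1, 2):
        return "ADAS"   # assists the human with part of the driving task
    if level in (3, 4, 5):
        return "ADS"    # drives with varying degrees of human supervision
    return "No automation"
```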
Explanations are reasons that underlie why an action was or should be taken [8,16]. Explanations have been used to support a range of automation, such as automated decision aid, driving automation, and recommender systems. Explanations reduce surprise and concerns about the actions taken by the automation, and facilitate trust in that automation [19,20,21]. For example, explanations in the human–automation interaction interface design promote users’ trust and acceptance [22].
Research examining AVs has also demonstrated that explanations can promote AV trust and reduce negative emotional reactions [8,9,10,11,23]. For example, researchers found that providing explanations about AV actions led to the highest level of positive emotional valence and AV acceptance compared to a no-explanation condition [9]. Additionally, people gave a higher rating to an interface that had speech output to explain the AV action in terms of its usability, anthropomorphism, and trust compared to interfaces that did not explain the AV actions [11].
Prior research also looked deeper to examine the timing of the AV explanation and the degree of AV autonomy, which could help us understand why or when AV explanations are likely to be effective at promoting trust and reducing anxiety. An AV that provides an explanation before rather than after it takes action reduces the uncertainty associated with its action [8]. This reduction in uncertainty increases trust and decreases anxiety. Although providing an explanation after an AV takes action allows drivers to know the reasons behind the action and increases their understanding of the system, it cannot necessarily increase trust in the AV because of the lack of an alert and sense of control [10,24,25].
Other scholars suggest that the AV’s degree of autonomy might also influence the effectiveness of its explanations [26,27]. AV explanations might be more effective when the AV provides them and seeks the driver’s approval before taking action. Handing over driving control is one of the barriers to human drivers trusting and accepting advanced driving systems, including AVs. A loss of driving control is often associated with a sense of worry [28]. An AV with a lower degree of autonomy, one that asks drivers for permission to act, affords the driver a higher level of control. Yet in one study, providing the driver with an explanation along with the option to approve or disapprove the AV action did not promote more trust or lower anxiety any more than just providing the explanation [8]. As such, there is little evidence to support the potential benefits of lower autonomy.
In summary, previous literature provides some guidance on how AV explanations can influence drivers. First, AV explanations are most effective when provided before an AV acts. At the same time, AV explanations are the least effective when provided after an AV acts. Finally, the AV’s level of autonomy has little impact on the effectiveness of its explanations. However, the literature offers little insight into the role of the driver’s age in the effectiveness of AV explanations.

3. Hypotheses Development

The literature on driver’s age and driving automation has found differences among age groups in several areas. First, comfort with giving the AV control over the driving varies greatly with age. Generally, younger drivers feel more comfortable giving up driving control to the driving automation. This was highlighted in a recent survey with 2954 participants [29]. The survey found that younger drivers were more comfortable with letting the vehicle drive itself compared to older drivers [26]. One reason often given is that younger drivers are more likely to have been exposed to driving automation, which is a strong predictor of whether a driver will be comfortable relinquishing control [30]. At the other end of the spectrum, older drivers have been shown to be less comfortable with giving up driving control [29,31]. For example, a recent study using a driving simulator found that older drivers prefer to retain some degree of driving control instead of giving it all up [31].
Second, the degree of trust afforded to driving automation has also been shown to vary by age group. Trust has been repeatedly shown to be one of the most important factors that influence people’s willingness to use driving automation [32,33,34,35]. Trust refers to “the willingness of a party to be vulnerable to the actions of another party based on the expectation that the other will perform a particular action important to the trustor, irrespective of the ability to monitor or control that other party” [36]. Based on the literature, the degree of trust in driving automation varies greatly by the driver’s age. Younger drivers have shown higher trust in the ADS compared to other age groups [37]. Middle-aged drivers tend to be hesitant to trust an ADS [38]. Older drivers tend to distrust ADS, especially if they do not understand how the systems operate [30]. This is even more problematic when the driving automation (i.e., ADS or ADAS) seems to fail and drivers do not comprehend why [39,40,41].
Third, the level of anxiety associated with the use of driving automation has also been shown to vary by age group. Anxiety has been identified as an important factor in understanding the adoption of driving automation among different age groups [8,42]. Defined as a feeling of fear, worry, apprehension, or concern, anxiety can reduce cue utilization, shrink the perceptual field, or reduce an individual’s scanning of the environment [43]. Reimer, Mehler, Coughlin, Godfrey, and Tan [28], in 2009, designed a field study using a real vehicle and found that older drivers tend to experience more anxiety when employing an ADAS. Anxiety has been shown to be negatively correlated with trust and ADS adoption [8]. High levels of anxiety can discourage drivers from trusting and further adopting ADS.
Finally, to be clear, there is literature that has shown that younger drivers do not always have positive attitudes toward driving automation. A study employing an automated driving simulator found that younger drivers’ use of an ADS decreased their driving enjoyment [44]. Another study employing an automated driving simulator found that middle-aged drivers thought that ADAS were less useful than older drivers did [38].
Building on and integrating the literature on driver’s age and driving automation along with the literature on AV explanations, we expect to see age differences in the impact of AV explanations on outcomes such as trust and anxiety. There are several reasons to expect age differences. First, age differences regarding trust and anxiety have been found with regard to advanced driving automation. Younger drivers are more inclined to trust driving automation, followed by middle-aged drivers, while older drivers appear to be the most reluctant to trust driving automation [30,37,38]. Moreover, older drivers appear to experience more anxiety and stress than drivers in other age groups when employing driving automation [28].
That said, we also know that AV explanations can promote trust and lower anxiety [8,9,10,45]. In particular, the timing of the explanation can be important for promoting trust. Explanation before rather than after the AV has taken action reduces the driver’s uncertainty about the driving situation, further increasing AV trust [8,11] and acceptance of the AV [11]. Similarly, Koo, Shin, Steinert, and Leifer [10] found that when the AV explained what it was going to do before it acted, it significantly decreased drivers’ anxiety associated with driving. In addition, previous studies have also illustrated the importance of automation autonomy on drivers’ perceptions of the automation (e.g., computer control, driving automation). The degree of autonomy refers to how much independence the automation has with regard to making decisions and taking actions without human intervention. Research has shown that automation with a high degree of autonomy is often less trusted [46,47].
Based on this literature, we derived the following two hypotheses to answer this research question: Does the driver’s age influence the relationship between AV explanations and the driver’s anxiety and trust?
Hypothesis 1.
There will be mean differences in drivers’ AV trust both within and between age groups across AV explanation conditions.
Hypothesis 2.
There will be mean differences in drivers’ anxiety both within and between age groups across AV explanation conditions.

4. Materials and Methods

This research complied with the American Psychological Association code of ethics and was approved by the university’s institutional review board. All participants provided informed consent.

4.1. Participants

A total of 40 drivers (mean age = 34.9 years, standard deviation (SD) = 17.3 years) participated in this study. Before conducting the study, a power analysis was performed to determine the sample size. The effect size (ES) in this study was 0.5, considered to be medium using Cohen’s (1988) criteria. With alpha = 0.05 and power = 0.8, the total sample size needed with this effect size (G*Power 3.1) was 12 for this “ANOVA: repeated measures, within-between interaction” group comparison. The analysis indicated that the 40 participants in this study constituted a sample size sufficient to detect effects of this magnitude.
Participants were divided into three age groups: younger, middle-aged, and older adults. Twelve younger drivers (mean age = 21.5 years, SD = 0.26 years, 6 women) and 20 middle-aged drivers (mean age = 30.1 years, SD = 0.64 years, 5 women) were recruited from e-mail groups, and 8 older drivers (mean age = 66.9 years, SD = 0.75 years, 4 women) were recruited by advertisements on the University of Michigan Health Research Web site. All participants were screened for inclusion criteria including driver’s license status, visual and hearing impairments, and susceptibility to simulator sickness. Each participant was paid $20 for participating.

4.2. Study Design

We conducted a mixed-design 4 (AV explanation condition) × 3 (age group) experiment in a controlled lab setting to examine the hypotheses. The human-subject experiment involved 40 participants using a high-fidelity driving simulator. The sequence of the four AV explanation conditions was counterbalanced via a Latin square design. Each AV explanation condition contained three unexpected and unique events (i.e., events by other drivers, events by police vehicles, and unexpected reroutes) across urban, highway, and rural environments. The simulated environments were typical of the United States. The urban and rural roads were four lanes, two for each traffic direction, separated by lane markings. The highways comprised two two-lane roadways separated by grass median strips. All participants were exposed to the same four conditions with the same three events in each condition. The driving weather was sunny with clear visibility, and the road conditions were good. Both the weather and road conditions remained the same across all conditions.
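The Latin square counterbalancing described above can be sketched with a cyclic construction, one common way to build such a square. The paper does not report its exact square, so the condition labels and ordering here are illustrative.

```python
# Illustrative sketch: a cyclic Latin square for counterbalancing the four
# AV explanation conditions across participants. The condition labels and
# this particular construction are our own, not the paper's.
def latin_square(conditions):
    n = len(conditions)
    # Row r is the presentation order for participant group r.
    return [[conditions[(row + col) % n] for col in range(n)]
            for row in range(n)]

orders = latin_square(["no-expl", "before", "after", "permission"])
# Every condition appears exactly once in each row (per-participant order)
# and once in each column (per-position), which is what counterbalancing needs.
```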

4.3. Independent Variables

The independent variables in this study included the driver’s age and the AV explanation conditions. Driver’s age consisted of three age groups which were based on the age categories used in other studies [48,49,50]. The three age groups were younger drivers (18–24 years), middle-aged drivers (25–54 years), and older drivers (55 years and older). There were four AV explanation conditions. The first condition was the no-AV-explanation condition. In this condition, the AV provided no explanation about its actions to the driver. The second was the AV-explanation-before-action condition. In this condition, the AV provided an explanation to the driver prior to it taking the action. The third explanation condition was the AV-explanation-after-action condition. In this condition, the AV provided an explanation after it took an action. In the fourth condition, the AV provided an explanation then asked for the driver’s approval before taking or not taking any action. For example, before taking action the AV would explain “Unclear lane lines—reroute?” If the participant responded with a “Yes”, the AV would reroute; otherwise, the AV would continue with its original route.
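The logic of the fourth (request-for-permission) condition can be sketched as follows, using the paper's example prompt. The function name, return strings, and prompt formatting are our own illustrative choices.

```python
# Hypothetical sketch of the request-for-permission condition's decision logic.
# The AV explains the situation, asks for approval, and acts only on "Yes".
def handle_event(explanation, driver_response):
    """Return the AV's action given its explanation and the driver's reply."""
    prompt = f"{explanation} -- reroute?"  # e.g., "Unclear lane lines -- reroute?"
    if driver_response.strip().lower() == "yes":
        return "reroute"
    return "continue original route"

handle_event("Unclear lane lines", "Yes")  # the AV reroutes
handle_event("Unclear lane lines", "No")   # the AV keeps its original route
```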

4.4. Control Variables

The study includes participants’ trust propensity and physical workload as the control variables to reduce the possibility of alternative explanations. These variables have been found to influence people’s trust in AVs in previous studies [51,52].

4.5. Dependent Variables

The dependent variables in this study were trust and anxiety. We measured trust by adapting Muir’s (1987) 7-point Likert scale (1: strongly disagree; 7: strongly agree) [53], a highly validated automation trust scale comparable with Jian’s trust scale [54]. The Muir scale consists of six dimensions: competence, predictability, dependability, responsibility, reliability, and faith. We measured anxiety using a questionnaire adapted from Nass et al. [55] that is used to measure driver attitude. Anxiety comprised the averaged responses to four adjective items describing feelings while driving the AV: fearful, afraid, anxious, and uneasy. All the items were rated on a 7-point Likert rating scale (1: describes very poorly; 7: describes very well).
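The anxiety scoring just described (the mean of four 7-point items) can be sketched as below. The item names follow the text; the function name and dictionary interface are our own.

```python
# Sketch of the anxiety score: the average of four 7-point adjective items
# (fearful, afraid, anxious, uneasy), as described in the text.
def anxiety_score(ratings):
    """ratings: dict mapping each adjective item to a 1-7 Likert rating."""
    items = ("fearful", "afraid", "anxious", "uneasy")
    for item in items:
        if not 1 <= ratings[item] <= 7:
            raise ValueError(f"{item} must be rated 1-7")
    return sum(ratings[i] for i in items) / len(items)

anxiety_score({"fearful": 2, "afraid": 3, "anxious": 2, "uneasy": 1})  # 2.0
```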

4.6. Apparatus

Participants rode a programmed AV in a simulated environment with a high-fidelity advanced driving simulator (Figure 2). The simulator consisted of a Nissan Versa sedan providing all manual controls and a simulation system running with programmable software (version 2.63 of Realtime Technology’s RTI). Four projectors displayed the visual environment to participants on four flat walls. The forward road scenes were projected on three walls about 16 feet in front of the driver (120-degree field of view), and the rear view was shown on a rear wall located 12 feet from the steering wheel (40-degree field of view). Each forward screen was set at a resolution of 1400 × 1050 pixels and updated at 60 Hz, and the rear screen was set at a resolution of 1024 × 768 pixels.
In this study, the automation features of the driving simulator were programmed to simulate an AV with SAE level 4, wherein the driver was not required to actively monitor the environment and the longitudinal and lateral vehicle control, navigation, and responses to traffic control devices and other traffic elements were all undertaken by the AV. All production vehicle controls (e.g., turn signals, headlights, shifter) functioned as normal. The AV was able to function in all driving situations as well as the average human driver and obeyed all traffic laws.
After starting a simulated drive, each participant was instructed to engage automation manually by pushing a button located on the lower right side of the steering wheel labeled “ON/OFF”, and then she/he would no longer need to actively monitor the roadway or control the vehicle.
To present the event explanations to participants, the simulator employed a neutral tone of a male voice with a standard American accent. As shown in Table 2, the events across four AV explanation conditions were chosen from previous literature and corresponded to realistic unexpected situations in automated driving [8,56]. All the events were programmable considering the accessibility of the driving simulator.

4.7. Procedure

Upon arrival, participants were briefed on the experiment and signed a consent form. Participants then completed a demographics survey. Participants received a 3-minute training session prior to the actual experiment. In the training session, participants were instructed about the AV’s attributes. Specifically, the participants were told that the vehicle is able to drive safely entirely on its own; the car is able to function in all driving situations as well as the average human driver; it obeys all traffic laws; it receives navigation information from external sources similar to Google Maps, and can change routes to reach a destination more quickly if one is identified or available; and the autonomous vehicle maintains lanes by visually sensing the lane lines on the roadway.
Participants were shown how to transfer the AV from manual control to automated mode by placing the vehicle in the center of the right lane and pressing the automated mode activation button. Participants also practiced giving permission to the AV via their verbal input. After the training, participants experienced a 60-minute experimental session with the four explanation conditions, as described. In each AV explanation condition, participants engaged in a 6- to 8-minute drive with events occurring at prescribed times in the drive at intervals of 1–2 min.
After each explanation condition, which included three events each, participants completed a follow-up survey consisting of two questionnaires to measure trust and anxiety. All questionnaire items were adapted from validated prior research. There was a 2-minute break between AV explanation conditions.

5. Results

To determine whether the measurement constructs were valid and reliable, we assessed construct validity and reliability. Construct validity determines the extent to which a scale captures the concept it is supposed to measure [57]. Convergent and discriminant validity are two subtypes of validity that make up construct validity. Both were assessed through exploratory factor analysis. Scale items that loaded at 0.70 or above on their corresponding construct indicate convergent validity, while scale items that loaded at 0.35 or below on other constructs indicate discriminant validity [58]. All scale items generally met or exceeded both requirements. Construct reliability is a measure of the internal consistency associated with a set of scale items [59]. Cronbach’s alpha is the most widely used measure of reliability [60,61]. All construct reliabilities were at or above the acceptable threshold of 0.70 [62]. In addition, Table 3 lists the means, standard deviations, mode, median, and correlations.
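The Cronbach's alpha reliability check mentioned above follows the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A minimal pure-Python sketch (real analyses would use a statistics package):

```python
# Minimal sketch of Cronbach's alpha for a set of scale items.
# items: one list of responses per scale item, all of equal length
# (one response per participant).
def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    k = len(items)
    # Total score per participant = sum of that participant's item responses.
    totals = [sum(resp) for resp in zip(*items)]
    item_var = sum(variance(it) for it in items)
    return k / (k - 1) * (1 - item_var / variance(totals))
```

Perfectly correlated items yield alpha = 1.0; alpha at or above 0.70 is the conventional acceptability threshold the text cites.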
To test the hypotheses with data from the 40 participants, we used the SPSS Statistical 24 mixed linear model package. The alpha was set at 0.05 for all statistical tests. All post hoc comparisons utilized a Bonferroni alpha correction.
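The Bonferroni correction used for the post hoc comparisons above multiplies each raw p value by the number of comparisons (capped at 1). A minimal sketch:

```python
# Sketch of the Bonferroni alpha correction applied to post hoc comparisons:
# each raw p value is scaled by the number of comparisons, capped at 1.0.
def bonferroni(p_values):
    m = len(p_values)
    return [min(1.0, p * m) for p in p_values]

bonferroni([0.01, 0.04, 0.20])  # each p scaled by 3, so only 0.01 stays < 0.05
```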
The mixed design controlled for the individual differences in prior driving automation experience. Nonetheless, we tested for individual differences in prior experience using various forms of ADAS or ADS. More specifically, individuals reported their prior experience with cruise control systems, adaptive cruise control systems, lane-departure warning systems, lane-keeping assistance systems, collision warning systems, and emergency braking systems. We found no significant differences among the three age groups in terms of their experience with ADAS or ADS (F = 1.097, p = 0.345).

5.1. The Effect of Age and AV Explanation on Trust

The results of the Two-Way Mixed ANOVA showed that there was a significant interaction between age and AV explanation (F (6, 49) = 2.336, p = 0.035), as shown in Table 4. The following subsections present the results of the post hoc comparisons.

5.1.1. Trust among Age Groups

For the no-AV-explanation condition, results showed that trust for middle-aged drivers (μMid-age = 5.49) was significantly lower than trust for the younger drivers (μYounger = 5.88). However, no difference was found between middle-aged and older drivers (μOlder = 5.54), as well as between younger and older drivers (p > 0.05). Table 5 provides the means and standard deviations for each condition. The means and their corresponding significant p values are depicted in Figure 3a.
There was no significant difference in trust (p > 0.05) among the three age groups in the AV-explanation-before-action condition (μYounger = 6.17; μMid-age = 5.96; μOlder = 5.91). Table 5 provides the means and standard deviations for each condition. Figure 3b visually depicts the means and their corresponding significant p values.
In the AV-explanation-after-action condition, trust for middle-aged drivers (μMid-age = 4.94) was significantly lower (p < 0.05) than for both the younger (μYounger = 5.99) and older (μOlder = 5.54) drivers. No difference in trust (p > 0.05) was found between younger and older drivers. Table 5 provides the means and standard deviations for each condition. Figure 3c visually depicts the means and their corresponding significant p values.
The request-for-permission results showed that trust for older drivers (μOlder = 6.00) was significantly higher (p < 0.05) than for middle-aged drivers (μMid-age = 5.54) in this condition. However, no difference in trust (p > 0.05) was found between younger and older drivers, or between younger and middle-aged drivers. Table 5 provides the means and standard deviations for each condition. Figure 3d visually depicts the means and their corresponding significant p values.

5.1.2. Trust within Age Groups

For younger drivers, trust was significantly lower (p < 0.05) in the request-for-permission condition (μPermReq = 5.55) than in the no-explanation (μNExpl = 5.88), explanation-before-action (μBExpl = 6.17), and explanation-after-action (μAExpl = 5.99) conditions. Additionally, the AV-explanation-before-action condition led to higher (p < 0.05) trust compared to the no-AV-explanation and request-for-permission conditions. However, there was no difference in trust between the no-AV-explanation and AV-explanation-after-action conditions (p > 0.05). Table 5 provides the means and standard deviations for each condition. Figure 4a visually depicts the means and their corresponding significant p values.
For middle-aged drivers, trust was significantly higher (p < 0.05) in the AV-explanation-before-action condition (μBExpl = 5.96) than in the no-AV-explanation (μNExpl = 5.49), AV-explanation-after-action (μAExpl = 4.94), and request-for-permission (μPermReq = 5.54) conditions. However, the AV-explanation-after-action condition led to the lowest trust (p < 0.001). No significant difference was found in trust between the no-AV-explanation and request-for-permission conditions (p > 0.05). Table 5 provides the means and standard deviations for each condition. Figure 4b visually depicts the means and their corresponding significant p values.
For older drivers, trust was significantly lower (p < 0.05) in the no-AV-explanation (μNExpl = 5.54) and AV-explanation-after-action (μAExpl = 5.54) conditions than in the AV-explanation-before-action (μBExpl = 5.91) and request-for-permission (μPermReq = 6.00) conditions. However, no difference in trust was found between the AV-explanation-before-action and request-for-permission conditions (p > 0.05), and no difference was found between the no-AV-explanation and AV-explanation-after-action conditions (p > 0.05). Table 5 provides the means and standard deviations for each condition. Figure 4c visually depicts the means and their corresponding significant p values.

5.2. The Effect of Age and AV Explanation on Anxiety

The results of the Two-Way Mixed ANOVA showed that there was no significant interaction between age and AV explanation (F (6, 48) = 0.652, p = 0.689), as shown in Table 6. The following subsections therefore use exploratory data analysis to summarize the main characteristics of anxiety among and within age groups.

5.2.1. Anxiety among Age Groups

For the no-AV-explanation condition, the results showed that anxiety for younger drivers (μYounger = 3.18) was significantly higher (p < 0.05) than for middle-aged (μMid-age = 2.40) and older (μOlder = 2.13) drivers. No significant difference was found (p > 0.05) between middle-aged and older drivers. Table 7 provides the means and standard deviations for each condition. The means and their corresponding significant p values are depicted in Figure 5a.
For the AV-explanation-before-action condition, there was no significant difference in anxiety (p > 0.05) among the three age groups (μYounger = 2.81; μMid-age = 2.36; μOlder = 2.34). Table 7 provides the means and standard deviations for each condition. Figure 5b visually depicts the means and their corresponding significant p values.
For the AV-explanation-after-action condition, no significant difference in anxiety was found (p > 0.05) among the three age groups (μYounger = 2.50; μMid-age = 2.71; μOlder = 2.25). Table 7 provides the means and standard deviations for each condition. Figure 5c visually depicts the means and their corresponding significant p values.
For the request-for-permission condition, there was no significant difference in anxiety (p > 0.05) among the three age groups (μYounger = 2.81; μMid-age = 2.29; μOlder = 1.97). Table 7 provides the means and standard deviations for each condition. Figure 5d visually depicts the means and their corresponding significant p values.

5.2.2. Anxiety within Age Groups

For younger drivers, anxiety was significantly higher (p < 0.05) in the no-AV-explanation condition (μNExpl = 3.18) than in the AV-explanation-before-action (μBExpl = 2.81), AV-explanation-after-action (μAExpl = 2.50), and request-for-permission (μPermReq = 2.81) conditions. However, there were no differences in anxiety among the AV-explanation-before-action, AV-explanation-after-action, and request-for-permission conditions (p > 0.05). Table 7 provides the means and standard deviations for each condition. Figure 6a visually depicts the means and their corresponding significant p values.
For middle-aged drivers, anxiety was significantly higher (p < 0.05) in the AV-explanation-after-action condition (μAExpl = 2.71) than in the no-AV-explanation (μNExpl = 2.40), AV-explanation-before-action (μBExpl = 2.36), and request-for-permission (μPermReq = 2.29) conditions. However, there were no significant differences in anxiety among the no-AV-explanation, AV-explanation-before-action, and request-for-permission conditions (p > 0.05). Table 7 provides the means and standard deviations for each condition. Figure 6b visually depicts the means and their corresponding significant p values.
For older drivers, no significant difference (p > 0.05) in anxiety was found among the no-AV-explanation (μNExpl = 2.13), AV-explanation-before-action (μBExpl = 2.34), AV-explanation-after-action (μAExpl = 2.25), and request-for-permission (μPermReq = 1.97) conditions. Table 7 provides the means and standard deviations for each condition. Figure 6c visually depicts the means and their corresponding significant p values.
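The comparisons above follow a standard pattern: an omnibus test across the four explanation conditions, followed by pairwise comparisons. A minimal sketch with SciPy, using made-up anxiety ratings rather than the study's data, and independent-samples tests for simplicity (the study's mixed design would call for repeated-measures procedures):

```python
# Hypothetical anxiety ratings (1-7 scale) for one age group across the four
# explanation conditions; illustrative only, NOT the study's actual data.
from scipy import stats

no_expl     = [3.5, 3.0, 2.8, 3.4, 3.2]
before_expl = [2.9, 2.7, 2.6, 3.0, 2.8]
after_expl  = [2.6, 2.4, 2.5, 2.7, 2.3]
perm_req    = [2.9, 2.8, 2.7, 3.0, 2.6]

# Omnibus one-way ANOVA: do the condition means differ at all?
f_stat, p_omnibus = stats.f_oneway(no_expl, before_expl, after_expl, perm_req)

# Pairwise follow-up, e.g. no-explanation vs. explanation-before-action.
t_stat, p_pair = stats.ttest_ind(no_expl, before_expl)

print(f"omnibus F = {f_stat:.2f}, p = {p_omnibus:.4f}")
print(f"pairwise t = {t_stat:.2f}, p = {p_pair:.4f}")
```

In practice, pairwise follow-ups would also be corrected for multiple comparisons (e.g., Bonferroni).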

6. Discussion

The goal of this research was to understand how a driver’s age shapes how effective AV explanations are at promoting the driver’s trust and reducing the driver’s anxiety. The results of this study highlight the importance of the driver’s age in understanding the effects of AV explanations. For younger drivers, the AV-explanation-before-action and AV-explanation-after-action conditions led to the highest trust, while the request-for-permission condition led to the lowest trust. For middle-aged drivers, the AV-explanation-before-action condition led to the highest trust, while the AV-explanation-after-action condition led to the lowest trust. For older drivers, the request-for-permission and AV-explanation-before-action conditions both produced the highest trust; conversely, the AV-explanation-after-action condition resulted in the lowest trust.
There were also significant differences in drivers’ trust and anxiety between age groups across the AV explanation conditions. The AV-explanation-before-action condition produced the highest trust for drivers of all ages. For younger drivers, the AV-explanation-after-action condition was equally good, whereas for older drivers the request-for-permission condition was equally good. In all, when available, explanation before action is the preferred approach. The request-for-permission approach seems best suited to older drivers, for whom it produced the highest trust. The positive impact of requesting permission can be explained by prior research suggesting that older drivers struggle with relinquishing control of driving [31]; the request-for-permission approach simply gives more of that control back to the driver. Nonetheless, the AV-explanation-before-action condition produced benefits for older drivers similar to those of the permission condition. The request-for-permission approach, however, led to the lowest trust and highest anxiety for both younger and middle-aged drivers. That being said, control alone does not explain why younger and middle-aged drivers showed lower trust and higher anxiety in this condition than older drivers did. Future research is needed to fully explore these findings.
The AV-explanation-after-action approach seems best suited to younger drivers. It led to the lowest trust for middle-aged and older drivers, along with the highest anxiety for middle-aged drivers. By contrast, it led to the highest trust for younger drivers. Providing the explanation after the AV takes action could increase uncertainty relative to the AV-explanation-before-action condition and force drivers to retrieve information to understand why the AV acted as it did. To be clear, future research is needed to investigate this question.
The no-explanation approach produced mixed results for younger drivers. The no-explanation condition resulted in the most anxiety for younger drivers when compared to the other conditions (i.e., within age-group analysis). In addition, the no-explanation approach led to the highest anxiety for younger drivers when compared to middle-aged and older drivers (i.e., among age-group analysis). This means that providing explanations, regardless of timing, could significantly decrease younger drivers’ anxiety. That being said, younger drivers’ trust in the AV was also the least affected by the absence of an explanation. Because we would expect trust and anxiety to be negatively related, future research should be conducted to better understand these contradictory results for younger drivers.

6.1. Research Implications

Our results have several implications for research on age and driving automation. Our findings highlight the important role a driver’s age plays in understanding the impact of AV explanations on trust in driving automation. For example, if we assume that the request-for-permission condition represents heightened control, our findings support the prior literature on age and driving control. We found that the request-for-permission condition led to the highest trust only for older drivers. This aligns with prior literature suggesting that older drivers have the greatest difficulty with the loss of driving control [31]; for older drivers, the request-for-permission condition actually helps to alleviate this issue. This and other findings highlight the need to account for drivers’ age when theorizing about and designing AVs and their corresponding explanations.
This study also has implications for theories relating drivers’ age to driving automation. Several of our findings did not align with the prior literature on drivers’ age and driving automation. There are at least three ways to view these findings that appear to run counter to what we might expect given the prior literature. First, the discrepancies might result from differences in the level of automation studied (i.e., level 3 vs. level 4 vs. level 5). Second, the impacts of a driver’s age may not be uniform or linear but instead vary in ways that are hard to predict. For example, the assertion that younger drivers trust technology the most, followed by middle-aged drivers and then older drivers, is a uniform, linear approach to predicting the impacts of drivers’ age on driving automation. Our results do not support this assertion; instead, we found that although a driver’s age is important, its effect is often difficult to predict. Third, the level of automation and the driver’s age might jointly affect driving automation outcomes such as trust.
Our results also contribute to the literature on socially inclusive AI. Many scholars have highlighted the problems of biased AI and the need to build AI systems that are more inclusive [16,17]. AI explainability has been shown to be important to promoting trust between humans and AI, yet to date little research has been conducted to understand how individual differences might determine the effectiveness of those explanations. This might be the first study to explore the impact of individual differences not only on AV explanations specifically but also on AI explanations generally. Our results clearly highlight that individual differences are important to understanding the effectiveness of AI explanations. We hesitate to generalize the results of this study, or any one study, to other settings or populations. However, we believe these results justify future research to better understand how individual differences impact the effectiveness of AI explanations. In doing so, we take a step forward in designing AI that is more socially inclusive.

6.2. Design Implications

The findings in this study have several implications for AV design. First, AV explanations should be designed, in part, based on the driver’s age. Beyond this, our results provide guidelines for designing AVs. That being said, some of our findings apply across all age groups; for example, providing an AV explanation of any kind could be a universal approach to accommodating all age groups.
Second, there are important and meaningful differences across age groups. Based on our results, for older drivers the AV should be designed to ask for permission to take action before making any changes. By contrast, for younger and middle-aged drivers the AV should be designed to avoid this option because of the lower trust and higher anxiety it introduces.
Additionally, although this study focused on the AV domain, our results can be applied to other domains that involve AI explanations. In such domains, explanation is a key factor in providing transparency and explainability. However, to be inclusive, designers have to acknowledge that the user population consists of different age groups. This study indicates that age affects the relationship of AI explanations with trust and anxiety. Designers should account for such differences and focus on decreasing anxiety and increasing trust across age groups.

7. Conclusions

AV explanations are vital to promoting drivers’ trust and reducing their anxiety. Findings in this study highlight the importance of drivers’ age in understanding these effects. The implications of this study highlight the need for future research on AV explanations and drivers’ age. Implications of this study also provide opportunities for future research to build and expand on the ideas in this paper toward socially inclusive AI.
However, this study also has limitations that need to be addressed in future research. First, although the experimental setting provides high internal validity, it has limitations with regard to external validity. For example, all participants in this study were recruited from a university-related subject pool. These individuals might differ in their AV-related knowledge and experience from others in the general population. In addition, participants might have engaged in hypothesis guessing and altered their responses based on what they thought the researcher desired; to be sure, we found no evidence of this in our study. Ultimately, future studies should be conducted in field settings to increase external validity. Second, the average level of trust was relatively high (i.e., 5.66 out of 7) and the average anxiety level low (i.e., 2.50 out of 7) in this study. However, these averages are comparable to levels found in prior studies that examined level 4 or 5 AVs (e.g., [63,64,65]).
Third, this study did not examine many other attributes associated with AVs and AV explanations. These include AV driving behaviors; explanations related to the definition, generation, selection, and evaluation of alternative courses of action for the driver; and the presentation of the explanations as well as the modality used to deliver the explanations [66]. For example, this study only examined the auditory modality. Future studies should be conducted to examine these and other possible attributes associated with AVs and AV explanations. Future research might even focus on what an AV should and should not explain. In all, there is clearly more research needed in this new area.

Author Contributions

Conceptualization, L.P.R.J. and X.J.Y.; methodology, L.P.R.J., X.J.Y., and Q.Z.; validation, L.P.R.J. and X.J.Y.; formal analysis, L.P.R.J., X.J.Y., and Q.Z.; writing—original draft preparation, L.P.R.J., X.J.Y., and Q.Z.; writing—review and editing, L.P.R.J., X.J.Y., and Q.Z.; visualization, L.P.R.J., X.J.Y., and Q.Z.; supervision, L.P.R.J. and X.J.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by University of Michigan’s Mcity, grant number: 2017–2018.

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki, and approved by the Institutional Review Board of University of Michigan (HUM00131982; 8/24/2017).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Betz, M.; DiGuiseppi, C.; Villavicencio, L.; Kelley-Baker, T.; Kim, W.; Mielenz, T.; Eby, D.; Molnar, L.; Hill, L.; Strogatz, D. Discussions with Older Family Members about Safe Driving: Findings from the AAA LongROAD study. Available online: https://aaafoundation.org/discussions-with-older-family-members-about-safe-driving-findings-from-the-aaa-longroad-study/ (accessed on 11 February 2021).
  2. Papadoulis, A.; Quddus, M.; Imprialou, M. Evaluating the safety impact of connected and autonomous vehicles on motorways. Accid. Anal. Prev. 2019, 124, 12–22. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  3. Virdi, N.; Grzybowska, H.; Waller, S.T.; Dixit, V. A safety assessment of mixed fleets with connected and autonomous vehicles using the surrogate safety assessment module. Accid. Anal. Prev. 2019, 131, 95–111. [Google Scholar] [CrossRef] [PubMed]
  4. Vespa, J.; Armstrong, D.M.; Medina, L. Demographic Turning Points for the United States: Population Projections for 2020 to 2060; US Department of Commerce, Economics and Statistics Administration: Suitland, MD, USA, 2018.
  5. Du, N.; Zhou, F.; Pulver, E.M.; Tilbury, D.M.; Robert, L.P.; Pradhan, A.K.; Yang, X.J. Examining the effects of emotional valence and arousal on takeover performance in conditionally automated driving. Transp. Res. Part C Emerg. Technol. 2020, 112, 78–87. [Google Scholar] [CrossRef]
  6. Jayaraman, S.K.; Creech, C.; Tilbury, D.M.; Yang, X.J.; Pradhan, A.K.; Tsui, K.M.; Robert, L.P., Jr. Pedestrian trust in automated vehicles: Role of traffic signal and av driving behavior. Front. Robot. AI 2019, 6, 117. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  7. On-Road Automated Driving (ORAD) Committee. Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles; SAE International: Warrendale, PA, USA, 2018. [Google Scholar]
  8. Du, N.; Haspiel, J.; Zhang, Q.; Tilbury, D.; Pradhan, A.K.; Yang, X.J.; Robert, L.P., Jr. Look who’s talking now: Implications of AV’s explanations on driver’s trust, AV preference, anxiety and mental workload. Transp. Res. Part C Emerg. Technol. 2019, 104, 428–442. [Google Scholar] [CrossRef]
  9. Koo, J.; Kwac, J.; Ju, W.; Steinert, M.; Leifer, L.; Nass, C. Why did my car just do that? Explaining semi-autonomous driving actions to improve driver understanding, trust, and performance. Int. J. Interact. Des. Manuf. (IJIDEM) 2015, 9, 269–275. [Google Scholar] [CrossRef]
  10. Koo, J.; Shin, D.; Steinert, M.; Leifer, L. Understanding driver responses to voice alerts of autonomous car operations. Int. J. Veh. Des. 2016, 70, 377–392. [Google Scholar] [CrossRef]
  11. Forster, Y.; Naujoks, F.; Neukum, A. Increasing anthropomorphism and trust in automated driving functions by adding speech output. In Proceedings of the 2017 IEEE Intelligent Vehicles Symposium (IV), Los Angeles, CA, USA, 11–14 June 2017; pp. 365–372. [Google Scholar]
  12. Stutts, J.C.; Martell, C. Older driver population and crash involvement trends, 1974–1988. Accid. Anal. Prev. 1992, 24, 317–327. [Google Scholar] [CrossRef]
  13. Lee, C.; Coughlin, J.F. PERSPECTIVE: Older adults’ adoption of technology: An integrated approach to identifying determinants and barriers. J. Prod. Innov. Manag. 2015, 32, 747–759. [Google Scholar] [CrossRef]
  14. Andersen, G.J.; Enriquez, A. Aging and the detection of observer and moving object collisions. Psychol. Aging 2006, 21, 74. [Google Scholar] [CrossRef]
  15. Shanmugaratnam, S.; Kass, S.J.; Arruda, J.E. Age differences in cognitive and psychomotor abilities and simulated driving. Accid. Anal. Prev. 2010, 42, 802–808. [Google Scholar] [CrossRef]
  16. Robert, L.P.; Pierce, C.; Marquis, L.; Kim, S.; Alahmad, R. Designing fair AI for managing employees in organizations: A review, critique, and design agenda. Hum. Comput. Interact. 2020, 35, 545–575. [Google Scholar] [CrossRef]
  17. Robert, L.P., Jr.; Bansal, G.; Lütge, C. ICIS 2019 SIGHCI Workshop Panel Report: Human–Computer Interaction Challenges and Opportunities for Fair, Trustworthy and Ethical Artificial Intelligence. Ais Trans. Hum. Comput. Interact. 2020, 12, 96–108. [Google Scholar] [CrossRef]
  18. Borup, M.; Brown, N.; Konrad, K.; Van Lente, H. The sociology of expectations in science and technology. Technol. Anal. Strateg. Manag. 2006, 18, 285–298. [Google Scholar] [CrossRef]
  19. Pu, P.; Chen, L. Trust building with explanation interfaces. In Proceedings of the the 11th International Conference on Intelligent User Interfaces; Association for Computing Machinery: New York, NY, USA, 2006; pp. 93–100. [Google Scholar]
  20. Ruijten, P.A.; Terken, J.; Chandramouli, S.N. Enhancing trust in autonomous vehicles through intelligent user interfaces that mimic human behavior. Multimodal Technol. Interact. 2018, 2, 62. [Google Scholar] [CrossRef] [Green Version]
  21. Glass, A.; McGuinness, D.L.; Wolverton, M. Toward establishing trust in adaptive agents. In Proceedings of the 13th International Conference on Intelligent User Interfaces; Association for Computing Machinery: New York, NY, USA, 2018; pp. 227–236. [Google Scholar]
  22. Thill, S.; Hemeren, P.E.; Nilsson, M. The apparent intelligence of a system as a factor in situation awareness. In Proceedings of the 2014 IEEE International Inter-Disciplinary Conference Cognitive Methods in Situation Awareness and Decision Support (CogSIMA), San Antonio, TX, USA, 3–6 March 2014; pp. 52–58. [Google Scholar]
  23. Wiegand, G.; Schmidmaier, M.; Weber, T.; Liu, Y.; Hussmann, H. I Drive-You Trust: Explaining Driving Behavior Of Autonomous Cars. In Proceedings of the Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems; Association for Computing Machinery: New York, NY, USA, 2019; pp. 1–6. [Google Scholar]
  24. Körber, M.; Baseler, E.; Bengler, K. Introduction matters: Manipulating trust in automation and reliance in automated driving. Appl. Ergon. 2018, 66, 18–31. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  25. Trubia, S.; Severino, A.; Curto, S.; Arena, F.; Pau, G. Smart Roads: An Overview of What Future Mobility Will Look Like. Infrastructures 2020, 5, 107. [Google Scholar] [CrossRef]
  26. Rovira, E.; McGarry, K.; Parasuraman, R. Effects of imperfect automation on decision making in a simulated command and control task. Hum. Factors 2007, 49, 76–87. [Google Scholar] [CrossRef]
  27. Verberne, F.M.; Ham, J.; Midden, C.J. Trust in smart systems: Sharing driving goals and giving information to increase trustworthiness and acceptability of smart systems in cars. Hum. Factors 2012, 54, 799–810. [Google Scholar] [CrossRef]
  28. Reimer, B.; Mehler, B.; Coughlin, J.F.; Godfrey, K.M.; Tan, C. An on-road assessment of the impact of cognitive workload on physiological arousal in young adult drivers. In Proceedings of the 1st International Conference on Automotive User Interfaces and Interactive Vehicular Applications; Association for Computing Machinery: New York, NY, USA, 2009; pp. 115–118. [Google Scholar]
  29. Abraham, H.; Lee, C.; Brady, S.; Fitzgerald, C.; Mehler, B.; Reimer, B.; Coughlin, J.F. Autonomous vehicles and alternatives to driving: Trust, preferences, and effects of age. In Proceedings of the Transportation Research Board 96th Annual Meeting (TRB’17), Washington, DC, USA, 8–12 January 2017. [Google Scholar]
  30. Charness, N.; Yoon, J.S.; Souders, D.; Stothart, C.; Yehnert, C. Predictors of attitudes toward autonomous vehicles: The roles of age, gender, prior knowledge, and personality. Front. Psychol. 2018, 9, 2589. [Google Scholar] [CrossRef] [PubMed]
  31. Li, S. Investigating Older Drivers’ Takeover Performance and Requirements to Facilitate Safe and Comfortable Human-Machine Interactions in Highly Automated Vehicles. Ph.D. Thesis, Newcastle University, Newcastle, UK, 2019. [Google Scholar]
  32. Furlan, A.; Vrkljan, B.; Abbas, H.H.; Babineau, J.; Campos, J.; Haghzare, S.; Kajaks, T.; Tiong, M.; Vo, M.; Lavalliere, M. The Impact of Advanced Vehicle Technologies on Older Driver Safety: A Scoping Review of Subjective Outcomes. In Proceedings of Adjunct Proceedings of the 10th International Conference on Automotive User Interfaces and Interactive Vehicular Applications; Association for Computing Machinery: New York, NY, USA, 2018; pp. 186–191. [Google Scholar]
  33. Haghzare, S.; Campos, J.; Mihailidis, A. Identifying the Factors Influencing Older Adults’ Perceptions of Fully Automated Vehicles. In Proceedings of the Adjunct Proceedings of the 10th International Conference on Automotive User Interfaces and Interactive Vehicular Applications; Association for Computing Machinery: New York, NY, USA, 2018; pp. 98–103. [Google Scholar]
  34. Kraus, J.M.; Nothdurft, F.; Hock, P.; Scholz, D.; Minker, W.; Baumann, M. Human after all: Effects of mere presence and social interaction of a humanoid robot as a co-driver in automated driving. In Proceedings of the Adjunct Proceedings of the 8th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Ann Arbor, MI, USA, 24–26 October 2016; pp. 129–134. [Google Scholar]
  35. Manchon, J.; Bueno, M.; Navarro, J. From manual to automated driving: How does trust evolve? Theor. Issues Ergon. Sci. 2020, 1–27. [Google Scholar] [CrossRef]
  36. Mayer, R.C.; Davis, J.H.; Schoorman, F.D. An integrative model of organizational trust. Acad. Manag. Rev. 1995, 20, 709–734. [Google Scholar] [CrossRef]
  37. Frison, A.-K.; Aigner, L.; Wintersberger, P.; Riener, A. Who is generation A? Investigating the experience of automated driving for different age groups. In Proceedings of the 10th International Conference on Automotive User Interfaces and Interactive Vehicular Applications; Association for Computing Machinery: New York, NY, USA, 2018; pp. 94–104. [Google Scholar]
  38. Donmez, B.; Boyle, L.N.; Lee, J.D.; McGehee, D.V. Drivers’ attitudes toward imperfect distraction mitigation strategies. Transp. Res. Part F Traffic Psychol. Behav. 2006, 9, 387–398. [Google Scholar] [CrossRef]
  39. Dzindolet, M.T.; Peterson, S.A.; Pomranky, R.A.; Pierce, L.G.; Beck, H.P. The role of trust in automation reliance. Int. J. Hum. Comput. Stud. 2003, 58, 697–718. [Google Scholar] [CrossRef]
  40. Niculescu, A.I.; Dix, A.; Yeo, K.H. Are you ready for a drive? User perspectives on autonomous vehicles. In Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems, Denver, CO, USA, 6–11 May 2017; pp. 2810–2817. [Google Scholar]
  41. Trübswetter, N.; Bengler, K. Why Should I Use ADAS? Advanced Driver Assistance Systems and the Elderly: Knowledge, Experience and Usage Barriers. Available online: https://core.ac.uk/download/pdf/193221066.pdf (accessed on 10 February 2021).
  42. Kraus, J.; Scholz, D.; Messner, E.-M.; Messner, M.; Baumann, M. Scared to Trust?–Predicting Trust in Highly Automated Driving by Depressiveness, Negative Self-Evaluations and State Anxiety. Front. Psychol. 2020, 10, 2917. [Google Scholar] [CrossRef] [PubMed]
  43. Staal, M.A. Stress, Cognition, and Human Performance: A Literature Review and Conceptual Framework. Available online: https://www.researchgate.net/publication/267403286_Stress_Cognition_and_Human_Performance_A_Literature_Review_and_Conceptual_Framework (accessed on 10 February 2021).
  44. Hartwich, F.; Beggiato, M.; Krems, J.F. Driving comfort, enjoyment and acceptance of automated driving–effects of drivers’ age and driving style familiarity. Ergonomics 2018, 61, 1017–1032. [Google Scholar] [CrossRef] [PubMed]
  45. Naujoks, F.; Forster, Y.; Wiedemann, K.; Neukum, A. Improving usefulness of automated driving by lowering primary task interference through HMI design. J. Adv. Transp. 2017, 2017. [Google Scholar] [CrossRef] [Green Version]
  46. Endsley, M.R. Level of automation effects on performance, situation awareness and workload in a dynamic control task. Ergonomics 1999, 42, 462–492. [Google Scholar] [CrossRef] [Green Version]
  47. Rödel, C.; Stadler, S.; Meschtscherjakov, A.; Tscheligi, M. Towards autonomous cars: The effect of autonomy levels on acceptance and user experience. In Proceedings of the 6th International Conference on Automotive User Interfaces and Interactive Vehicular Applications; Association for Computing Machinery: New York, NY, USA, 2014; pp. 1–8. [Google Scholar]
  48. Waller, P.F. The older driver. Hum. Factors 1991, 33, 499–505. [Google Scholar] [CrossRef]
  49. Kervick, A.A.; Hogan, M.J.; O’Hora, D.; Sarma, K.M. Testing a structural model of young driver willingness to uptake Smartphone Driver Support Systems. Accid. Anal. Prev. 2015, 83, 171–181. [Google Scholar] [CrossRef]
  50. Engström, I.; Gregersen, N.P.; Granström, K.; Nyberg, A. Young drivers—reduced crash risk with passengers in the vehicle. Accid. Anal. Prev. 2008, 40, 341–348. [Google Scholar]
  51. Hoff, K.A.; Bashir, M. Trust in automation: Integrating empirical evidence on factors that influence trust. Hum. Factors 2015, 57, 407–434. [Google Scholar] [CrossRef]
  52. Schaefer, K.E.; Chen, J.Y.; Szalma, J.L.; Hancock, P.A. A meta-analysis of factors influencing the development of trust in automation: Implications for understanding autonomy in future systems. Hum. Factors 2016, 58, 377–400. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  53. Muir, B.M. Trust between humans and machines, and the design of decision aids. Int. J. Man-Mach. Stud. 1987, 27, 527–539. [Google Scholar] [CrossRef]
  54. Jian, J.-Y.; Bisantz, A.M.; Drury, C.G. Foundations for an empirically determined scale of trust in automated systems. Int. J. Cogn. Ergon. 2000, 4, 53–71. [Google Scholar] [CrossRef]
  55. Nass, C.; Jonsson, I.-M.; Harris, H.; Reaves, B.; Endo, J.; Brave, S.; Takayama, L. Improving automotive safety by pairing driver emotion and car voice emotion. In Proceedings of the CHI’05 Extended Abstracts on Human Factors in Computing Systems; Association for Computing Machinery: New York, NY, USA, 2005; pp. 1973–1976. [Google Scholar]
  56. Lenné, M.G.; Triggs, T.J.; Mulvihill, C.M.; Regan, M.A.; Corben, B.F. Detection of emergency vehicles: Driver responses to advance warning in a driving simulator. Hum. Factors 2008, 50, 135–144. [Google Scholar] [CrossRef]
  57. Bagozzi, R.P.; Yi, Y.; Phillips, L.W. Assessing construct validity in organizational research. Adm. Sci. Q. 1991, 421–458. [Google Scholar] [CrossRef]
  58. Fornell, C.; Larcker, D.F. Structural Equation Models with Unobservable Variables and Measurement Error: Algebra and Statistics; Sage Publications Sage CA: Los Angeles, CA, USA, 1981. [Google Scholar]
  59. Netemeyer, R.G.; Bearden, W.O.; Sharma, S. Scaling Procedures: Issues and applications; Sage Publications: Los Angeles, CA, USA, 2003. [Google Scholar]
  60. Streiner, D.L. Starting at the beginning: An introduction to coefficient alpha and internal consistency. J. Personal. Assess. 2003, 80, 99–103. [Google Scholar] [CrossRef] [PubMed]
  61. Cronbach, L.J. Coefficient alpha and the internal structure of tests. Psychometrika 1951, 16, 297–334. [Google Scholar] [CrossRef] [Green Version]
  62. Hulin, C.; Netemeyer, R.; Cudeck, R. Can a reliability coefficient be too high? J. Consum. Psychol. 2001, 10, 55–58. [Google Scholar]
  63. Desmond, P.A.; Hancock, P.A.; Monette, J.L. Fatigue and automation-induced impairments in simulated driving performance. Transp. Res. Rec. 1998, 1628, 8–14. [Google Scholar] [CrossRef]
  64. Tussyadiah, I.P.; Zach, F.J.; Wang, J. Attitudes toward autonomous on demand mobility system: The case of self-driving taxi. In Information and Communication Technologies in Tourism 2017; Springer: Berlin/Heidelberg, Germany, 2017; pp. 755–766. [Google Scholar]
  65. Jung, M.F.; Sirkin, D.; Gür, T.M.; Steinert, M. Displayed uncertainty improves driving experience and behavior: The case of range anxiety in an electric car. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, Seoul, Korea, 18–23 April 2015; pp. 2201–2210. [Google Scholar]
  66. Miller, T. Explanation in artificial intelligence: Insights from the social sciences. Artif. Intell. 2019, 267, 1–38. [Google Scholar] [CrossRef]
Figure 1. Projections of the older adult population: 2020 to 2060 [4].
Figure 2. Driving simulator.
Figure 3. The average trust between age groups under four different conditions: (a) the no-explanation condition; (b) the AV-explanation-before-action condition; (c) the AV-explanation-after-action condition; and (d) the request-for-permission condition.
Figure 4. The average trust under four different conditions by age group: (a) younger drivers; (b) middle-aged drivers; and (c) older drivers.
Figure 5. The average anxiety between age groups under four different conditions: (a) the no-explanation condition; (b) the AV-explanation-before-action condition; (c) the AV-explanation-after-action condition; and (d) the request-for-permission condition.
Figure 6. The average anxiety under four different conditions by age group: (a) younger drivers; (b) middle-aged drivers; and (c) older drivers.
Table 1. SAE (J3016) automation levels [7].
SAE Level | Name | Narrative Definition
Human driver monitors the driving environment:
0 | No Automation | The full-time performance by the human driver of all aspects of the dynamic driving task, even when “enhanced by warning or intervention systems”.
1 | Driver Assistance | The driving mode-specific execution by a driver assistance system of “either steering or acceleration/deceleration”, using information about the driving environment and with the expectation that the human driver performs all remaining aspects of the dynamic driving task.
2 | Partial Automation | The driving mode-specific execution by one or more driver assistance systems of both steering and acceleration/deceleration, using information about the driving environment and with the expectation that the human driver performs all remaining aspects of the dynamic driving task.
Automated driving system monitors the driving environment:
3 | Conditional Automation | The driving mode-specific performance by an automated driving system of all aspects of the dynamic driving task, with the expectation that the human driver will respond appropriately to a request to intervene.
4 | High Automation | The driving mode-specific performance by an automated driving system of all aspects of the dynamic driving task, even if a human driver does not respond appropriately to a request to intervene; the car can pull over safely by its guidance system.
5 | Full Automation | The full-time performance by an automated driving system of all aspects of the dynamic driving task under all roadway and environmental conditions that can be managed by a human driver.
Table 2. Explanation event description.
Event | Description
Efficiency Route Change | The AV rerouted in view of road construction ahead.
Swerving Vehicle Ahead | The vehicle ahead was swerving, so the AV slowed down until the swerving vehicle exited the highway.
Oversized Vehicle Ahead | There was an oversized vehicle ahead blocking the roadway, so the AV slowed down until the oversized vehicle turned at the intersection.
Heavy Traffic Rerouting | A heavy traffic jam was reported ahead, so the AV rerouted.
Police Vehicle Approaching | A police vehicle approached the AV from behind and activated its siren. The AV then pulled over and stopped.
Stopped Police Vehicle on Shoulder | A police vehicle was stopped on the shoulder, so the AV changed lanes to avoid a collision.
Abruptly Stopped Truck Ahead | There was a roadway obstruction ahead. The AV changed lanes.
Road Hazard Rerouting | The AV rerouted because it identified a road hazard ahead.
Police Vehicle Approaching | A police vehicle approached the AV from behind and activated its siren. The AV then asked the driver’s permission to pull over.
Unclear Lane Markings Rerouting | When the AV approached the intersection, the lane markings ahead were not clear. The AV then asked the driver’s permission to reroute.
Vehicle with Flashing Hazard Lights Ahead | A vehicle in the left front lane was flashing its hazard lights. The AV then asked the driver’s permission to slow down.
Table 3. Descriptive statistics.

| Variable | Mean | Std. Dev. | Mode | Median | Trust | Anxiety |
|---|---|---|---|---|---|---|
| Trust | 5.66 | 0.94 | 6.00 | 5.86 | 1 | |
| Anxiety | 2.50 | 1.25 | 1.00 | 2.13 | −0.36 ** | 1 |

** Correlation is significant at the 0.01 level (2-tailed).
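The key relationship in Table 3, the negative correlation of −0.36 between trust and anxiety, follows from standard descriptive formulas. A minimal sketch, using hypothetical ratings rather than the study's raw data:

```python
import statistics as st

# Hypothetical per-participant ratings (NOT the study's data), used only
# to illustrate how the Table 3 statistics are computed.
trust = [5.9, 6.1, 4.8, 5.5, 6.0, 5.2, 6.3, 5.7]
anxiety = [2.1, 1.8, 3.5, 2.6, 1.9, 3.0, 1.5, 2.4]

def describe(x):
    """Mean, sample standard deviation, and median, as reported in Table 3."""
    return st.mean(x), st.stdev(x), st.median(x)

def pearson_r(x, y):
    """Pearson correlation, the statistic reported as -0.36 ** in Table 3."""
    mx, my = st.mean(x), st.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

m, sd, med = describe(trust)
r = pearson_r(trust, anxiety)  # negative: higher trust pairs with lower anxiety
```

With data shaped like the study's, `r` comes out negative, matching the direction of the reported trust–anxiety correlation.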
Table 4. ANOVA summary table of trust.

| Source of Variation | Sum of Squares | df | Mean Square | F | p |
|---|---|---|---|---|---|
| (Intercept) | 18.235 | 1 | 18.235 | 29.342 | 0.000 |
| Explanation Condition | 4.617 | 3 | 1.539 | 2.477 | 0.064 |
| Age Groups | 4.301 | 2 | 2.151 | 3.461 | 0.034 |
| Explanation Condition × Age Groups | 8.710 | 6 | 1.452 | 2.336 | 0.035 |
| Physical Demand | 14.968 | 1 | 14.968 | 24.084 | 0.000 |
| Trust Propensity | 7.709 | 1 | 7.709 | 12.404 | 0.001 |
| Error | 89.493 | 144 | 0.621 | | |
| Total | 138.944 | 157 | | | |

Note: "df" indicates degrees of freedom; "F" indicates the F statistic; "p" indicates the p value.
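As a quick arithmetic check, each F statistic in Table 4 is the effect's mean square divided by the error mean square (MS = SS / df; F = MS_effect / MS_error). A minimal sketch with the values copied from the table:

```python
# Recover the F statistics in Table 4 from the printed sums of squares.
# MS = SS / df, and F = MS_effect / MS_error.
ERROR_SS, ERROR_DF = 89.493, 144
ms_error = ERROR_SS / ERROR_DF  # the 0.621 error mean square in Table 4

def f_stat(ss, df):
    """F ratio for an effect with sum of squares `ss` on `df` degrees of freedom."""
    return (ss / df) / ms_error

rows = {
    "Explanation Condition": (4.617, 3),  # reported F = 2.477
    "Age Groups": (4.301, 2),             # reported F = 3.461
    "Condition x Age Groups": (8.710, 6), # reported F = 2.336
    "Physical Demand": (14.968, 1),       # reported F = 24.084
    "Trust Propensity": (7.709, 1),       # reported F = 12.404
}
for name, (ss, df) in rows.items():
    print(f"{name}: F = {f_stat(ss, df):.3f}")
```

The recomputed ratios match the table's F column to within rounding of the printed sums of squares.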
Table 5. Mean and standard deviation of trust.

| Age Group | NExpl M (SD) | BExpl M (SD) | AExpl M (SD) | PermReq M (SD) |
|---|---|---|---|---|
| Younger | 5.88 (0.25) | 6.17 (0.20) | 5.99 (0.30) | 5.55 (0.28) |
| Middle-aged | 5.49 (0.20) | 5.96 (0.15) | 4.94 (0.23) | 5.54 (0.22) |
| Older | 5.54 (0.31) | 5.91 (0.24) | 5.54 (0.36) | 6.00 (0.35) |

Note: "NExpl" indicates the no-AV-explanation condition; "BExpl" indicates the AV-explanation-before-action condition; "AExpl" indicates the AV-explanation-after-action condition; "PermReq" indicates the request-for-permission condition; "M" indicates the mean; "SD" indicates the standard deviation.
Table 6. ANOVA summary table of anxiety.

| Source of Variation | Sum of Squares | df | Mean Square | F | p |
|---|---|---|---|---|---|
| (Intercept) | 15.657 | 1 | 15.657 | 10.875 | 0.001 |
| Explanation Condition | 1.794 | 3 | 0.598 | 0.415 | 0.742 |
| Age Groups | 14.995 | 2 | 7.498 | 5.208 | 0.007 |
| Explanation Condition × Age Groups | 5.630 | 6 | 0.938 | 0.652 | 0.689 |
| Physical Demand | 19.423 | 1 | 19.423 | 13.491 | 0.000 |
| Trust Propensity | 2.275 | 1 | 2.275 | 1.580 | 0.211 |
| Error | 207.319 | 144 | 1.440 | | |
| Total | 246.333 | 157 | | | |

Note: "df" indicates degrees of freedom; "F" indicates the F statistic; "p" indicates the p value.
Table 7. Mean and standard deviation of anxiety.

| Age Group | NExpl M (SD) | BExpl M (SD) | AExpl M (SD) | PermReq M (SD) |
|---|---|---|---|---|
| Younger | 3.18 (1.06) | 2.81 (1.06) | 2.50 (1.06) | 2.81 (1.06) |
| Middle-aged | 2.40 (1.04) | 2.36 (1.04) | 2.71 (1.03) | 2.29 (1.04) |
| Older | 2.13 (1.09) | 2.34 (1.09) | 2.25 (1.09) | 1.97 (1.09) |

Note: "NExpl" indicates the no-AV-explanation condition; "BExpl" indicates the AV-explanation-before-action condition; "AExpl" indicates the AV-explanation-after-action condition; "PermReq" indicates the request-for-permission condition; "M" indicates the mean; "SD" indicates the standard deviation.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Zhang, Q.; Yang, X.J.; Robert, L.P., Jr. Drivers’ Age and Automated Vehicle Explanations. Sustainability 2021, 13, 1948. https://doi.org/10.3390/su13041948

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
