
Takeover Safety Analysis with Driver Monitoring Systems and Driver–Vehicle Interfaces in Highly Automated Vehicles

1 Department of Mechanical Engineering, Sungkyunkwan University, 2066 Seobu-ro, Suwon 16419, Korea
2 ICT Based Public Transportation Research Team, Korea Railroad Research Institute, Uiwang 16105, Korea
* Author to whom correspondence should be addressed.
Appl. Sci. 2021, 11(15), 6685; https://doi.org/10.3390/app11156685
Submission received: 8 June 2021 / Revised: 15 July 2021 / Accepted: 16 July 2021 / Published: 21 July 2021
(This article belongs to the Special Issue Human Factors in Transportation Systems)

Abstract

According to SAE J3016, driving automation can be divided into six levels, and conditionally automated driving is possible from Level 3 upward. A conditionally or highly automated vehicle can encounter situations involving total system failure. Here, we studied a strategy for safe takeover in such situations. A human-in-the-loop simulator, driver–vehicle interface, and driver monitoring system were developed, and takeover experiments were performed using various driving scenarios and realistic autonomous driving situations. The experiments support the following conclusions. First, the visual–auditory–haptic complex alarm delivered warnings effectively, and its effectiveness correlated clearly with users’ subjective preferences. Second, in some scenario types the system had to enter a minimum risk maneuver or emergency maneuver immediately, without requesting takeover. Lastly, the risk of accidents can be reduced by a driver monitoring system that prevents the driver from becoming completely immersed in non-driving-related tasks. From these results we propose a safe takeover strategy that provides meaningful guidance for the development of autonomous vehicles. Considering users’ subjective questionnaire evaluations, the strategy is also expected to improve the acceptance and adoption of autonomous vehicles.

1. Introduction

Autonomous cars are expected to become a new solution for safety, environmental, and traffic-related problems. Furthermore, autonomous driving is an important technology for all core industries of the Fourth Industrial Revolution, including the sharing economy, artificial intelligence, and the Internet of Things [1]. The popularization of autonomous vehicles combined with electric or fuel cell technology is expected to play a key role in efficient energy consumption and the construction of sustainable, eco-friendly smart cities [2]. According to the SAE J3016 standard, autonomous driving can be classified into six levels, from Level 0 (completely manual) to Level 5 (completely autonomous); the classification criteria include the driver’s driving responsibilities and the level of the system’s driving support functions [3]. Up to Level 2, the driver must always supervise vehicle control. At Level 3, the autonomous driving system can temporarily take over control of the vehicle under certain conditions. Nevertheless, at Level 3, the driver must be able to take control at any time while keeping an eye on the driving situation, and the autonomous driving system must determine whether the driver can start driving [4]. Recently, UNECE/WP.29, the World Forum for Harmonization of Vehicle Regulations and the international body that sets vehicle safety standards, announced the world’s first binding international regulation on Level 3 autonomous driving, which covers [5]:
  • Automated lane-keeping system (ALKS) definition and activation criteria
  • System stability, including failsafe reaction
  • Driver availability recognition system and human–machine interface (HMI)
  • Autonomous driving system event recording and cyber security
California, USA, is one region where research on autonomous driving is actively conducted. Licensed autonomous driving test programs have operated on public roads there since 2014, and a driverless testing program has operated since 2018 [6]. As of 25 February 2021, a total of 56 autonomous-vehicle testing permit holders and seven autonomous-vehicle driverless testing permit holders were registered with the California Department of Motor Vehicles (DMV). Every year, both types of permit holders must track and report the number of times their vehicles disengaged from autonomous mode during testing. The top-level group drove about 1200 miles per disengagement in 2015 [7]; in 2020, they drove more than 23,000 miles per disengagement on average [8]. However, even in 2020, systems outside the top group still failed frequently, driving only 80 miles per disengagement on average (Figure 1). The leading causes were system failure (52%), driver factors (30%), and external environmental factors (11%) [9].
Takeover control between the system and the driver is one of the most critical functions for the safety of a highly automated vehicle. The first step in this process is determining how ready the driver is to start driving [10]. The driver monitoring system (DMS) has been included as a primary safety item in Euro NCAP’s 2025 Roadmap [11]. Research using heart rate [12], respiration rate [13], and gaze response [14,15] is in progress to measure the cognitive load, distraction, and drowsiness of the person in the driver’s seat. Approaches that attach sensors to the driver’s body lose accuracy when the driver moves and are inconvenient while driving. While these shortcomings are being addressed [16], camera-based monitoring systems have already been commercialized.
The takeover process proceeds with non-driving-related tasks (NDRTs), a disengagement scenario, and warnings raised through HMI as shown in Figure 2 [17,18]. Quantitatively evaluating the situation wherein the driver concentrates on other tasks in the autonomous driving mode is essential. Tasks such as phone calls, radio listening, video viewing, text entry, N-back tasks, and surrogate reference tasks (SuRTs) are then assigned to drivers, and metrics such as the NASA task load index (NASA-TLX) are used to evaluate the workload [19].
Next, based on statistical data, the disengagement scenarios are implemented under a sensor or system failure scenario or in other predefined scenarios [20,21]. Finally, optic, acoustic, and haptic HMIs are configured to warn the driver to return to driving, and the effectiveness of each notification method is analyzed [22,23,24]. The effects of the complexity of surrounding traffic and the type of NDRT on takeover quality have been analyzed. The probability of collision and the tendency to avoid accidents through steering were significantly higher under high-density traffic conditions [25,26]. The type of NDRT did not significantly affect the time required for the takeover, but it affected the takeover quality [27]. In autonomous driving mode, the driver’s gaze behavior reflected the level of distraction, and specific gaze parameters were suitable for assessing the adequacy of the driver’s monitoring strategy [28]. Since studies on HMI configuration and driver response focus on average or median values, or only on extremely urgent scenarios, differences among individuals must also be considered [29].
In this study, we propose a strategy for increasing takeover safety in conditionally automated vehicle system failure situations. Various driver–vehicle interfaces (DVI) introduced in previous studies [17] and the UNECE ALKS proposal [5] were comprehensively analyzed. A realistic driving simulator was developed to conduct experiments and acquire driver data on high-risk situations where accidents may occur. Event scenarios were designed based on disengagement reports and traffic accident statistics to simulate situations in which takeover could occur. By having the participants perform NDRTs based on general time-consuming statistical data, the takeover requests were provided in realistic autonomous driving situations. As suggested for future work in previous studies, we devised a method to utilize DMS to improve driver readiness or reaction speed. Finally, we analyzed the characteristics of scenarios that are too risky to request takeover requests despite these efforts. Based on the overall experimental results, a safe takeover strategy was designed.
The paper proceeds as follows: Section 2 explains the development of the experimental environment, including the driving simulator, DMS and DVI configuration, event scenarios, and the experimental procedure. In Section 3, we analyze the factors (DVI configuration, scenario types, DMS) affecting the safety of the transition process based on the experimental results. Finally, we propose a takeover strategy to maximize takeover safety in highly automated vehicles.

2. Autonomous Vehicle Takeover Simulation

2.1. Human-in-the-Loop Vehicle Simulator

Using a driving simulator for control transfer or HMI design has the following advantages: various event scenarios can be created and presented identically to all participants, and experiments can be carried out even in high-risk situations where accidents may occur. For these reasons, related studies have mainly used driving simulators [30,31,32,33,34]. To provide a real vehicle experience in a driving simulator, visual, auditory, and tactile cues and a vehicle motion generation system are important [35]. Driving simulators can be used effectively to study abnormal driver patterns such as drowsiness, fatigue, drunkenness, and drug use [36,37]. The validity of experiments using driving simulators was verified in several studies by comparing driver responsiveness in the simulator and in a real-world vehicle [38,39,40,41,42]. A paper comparing fixed-base driving simulations with field tests verified the validity of simulation in terms of surrogate task performance, visual attention, and driving performance [38]. The feasibility of driving simulators for studying speed, car-following distance, and reaction latency in work zones was demonstrated, although some driving behaviors were slightly more aggressive in the simulator [39]. It was suggested that low-cost or fixed-base simulators can achieve effects similar to high-cost simulators, and that the most important elements of a successful behavioral validation study are a carefully designed experimental procedure and correct interpretation of the results [40]. No statistically significant difference was found between the simulator and an on-road vehicle for the human factors associated with automated vehicles [41].
Figure 3 shows the human-in-the-loop (HITL) simulator used in this study and depicts the parts interacting with the driver and the actual hardware. The driver’s steering, gear, and pedal inputs were collected by a laptop and a real-time processor and transmitted to the main PC running the simulator. The virtual driving environment, visually very similar to the view from the driver’s seat of an actual vehicle, was presented to the driver through a large-screen (55-inch) 4K-resolution TV. A touch monitor for the NDRT was placed between the driver and the TV. Static environments such as roads, traffic lights, and buildings, as well as dynamic obstacles such as vehicles, pedestrians, and cyclists, were implemented using IPG CarMaker. This software provides solutions for virtual test driving, including basic vehicle systems such as dynamics and powertrain, driving scenario generators, autonomous driving sensor models, and realistic visual graphics.
In the driving simulator, the steering torque feedback and pedal haptic feedback considerably influenced the driving accuracy, environmental awareness, and realism [43,44]. The overall configuration of the steering system and controller is illustrated in Figure 4. A steering system that can output a maximum torque of 17 Nm was designed to implement real vehicle torque feedback by connecting a reduction gearbox to a smart motor with a nominal power of 615 W. An encoder and a torque sensor mounted on the shaft were used to measure the steering input of the driver.
In autonomous driving, the control algorithm manipulated the steering hardware, and the steering angle (as measured by the encoder) was input into the simulation. In manual driving, the steering was controlled by a torque control mode consisting of a road reaction torque calculated in the vehicle dynamics simulator (CarMaker) and an assist torque that reduced the steering load. The latter was inversely proportional to the driving speed, which provided comfortable steering at low speeds and stable control at high speeds.
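To make the assist logic above concrete, the following Python sketch computes the total steering-wheel torque felt by the driver in manual mode. The gain values, reference speed, and function signature are illustrative assumptions, not the controller actually implemented in the simulator; only the 17 Nm saturation limit comes from the hardware description above.

```python
def steering_feedback_torque(road_reaction_nm: float, speed_mps: float,
                             base_assist_gain: float = 0.6,
                             ref_speed_mps: float = 10.0,
                             max_torque_nm: float = 17.0) -> float:
    """Total steering torque in manual mode: road reaction plus assist.

    The assist torque opposes the road reaction, and its gain decays
    inversely with vehicle speed, giving light steering at low speed and
    firmer, more stable feedback at high speed. Gains are illustrative.
    """
    assist_gain = base_assist_gain * ref_speed_mps / max(speed_mps, ref_speed_mps)
    total = road_reaction_nm * (1.0 - assist_gain)
    # Saturate at the 17 Nm limit of the steering hardware described above.
    return max(-max_torque_nm, min(max_torque_nm, total))
```

With these example gains, a driver at 5 m/s feels only 40% of the computed road reaction, while at 30 m/s roughly 80% is passed through.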

2.2. Driver Monitoring System (DMS)

Research has been conducted to determine the drowsiness, fatigue, and distraction of drivers using a DMS [45,46]. In manual driving, methods to use steering and pedal input patterns without additional sensors have been proposed [47,48]. The state of the driver must be determined by measuring vital signs such as gaze, electroencephalograms (EEGs), electrocardiograms (ECGs), skin temperature, and respiration because driver control input is not present in the autonomous driving mode [49]. Moreover, determining abnormal conditions such as the driver’s excitement or drowsiness by calculating the heart rate (HR) and heart rate variability (HRV) is also possible with ECG data [50,51].
Methods to measure the driver’s workload or drowsiness using a breathing pattern have been proposed [52,53]. Some studies have reported that sleep can stabilize the rhythm without causing significant changes in the respiratory rate [54]. Therefore, monitoring the driver’s condition using biometric signals requires further analysis of experimental data. In addition, a sensing technology that can acquire data in a noncontact manner must be developed [55].
In this study, a respiration and ECG sensor was attached to the driver to analyze data patterns in the manual driving, autonomous driving, and takeover scenarios. As shown in Figure 5, biosignal data were acquired while driving, and peaks were detected in real-time to calculate heart rate and respiration rate. According to the UNECE ALKS regulation [5], four conditions (driver control input, eye blinking, eye closing, and conscious head or body movement) have been cited as examples for determining driver availability. Furthermore, it has been suggested that gaze direction toward the front road or rear-view mirror or head movement toward the driving task could be considered as a criterion for determining the driver’s attentiveness. Studies predicting driving behavior from gaze patterns [56] or measuring cognitive workload during driving [57,58,59] have been conducted.
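As a rough illustration of the real-time peak detection shown in Figure 5, the sketch below estimates heart rate from an ECG segment with SciPy. The sampling rate, amplitude threshold, and minimum peak spacing are assumed values, not the parameters of the actual sensor pipeline; an analogous routine applies to the respiration signal.

```python
import numpy as np
from scipy.signal import find_peaks

def heart_rate_bpm(ecg: np.ndarray, fs: float = 250.0) -> float:
    """Estimate heart rate (bpm) from a raw ECG segment by R-peak detection.

    Peaks must exceed a simple amplitude threshold and be at least 0.4 s
    apart (i.e., below 150 bpm). Both parameters are illustrative.
    """
    threshold = ecg.mean() + 2.0 * ecg.std()
    peaks, _ = find_peaks(ecg, height=threshold, distance=int(0.4 * fs))
    if len(peaks) < 2:
        return float("nan")
    rr_s = np.diff(peaks) / fs          # inter-beat intervals in seconds
    return 60.0 / rr_s.mean()
```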
In this study, the driver’s gaze data were acquired using a Tobii eye tracker with an accuracy of less than 0.1° and a sampling rate of 90 Hz. After calibration, as in Figure 6a, the device provides the on-screen location the driver is watching with accuracy sufficient for research use [60]. The eye-tracking system was developed by referring to the hardware mounting conditions and calibration method proposed in Gibaldi’s study [60]. Figure 6b shows eye-tracking results from manually driving the experimental course, visualized with the MATLAB heatmap function. In this figure, area A represents the front simulator screen, area B the cluster monitor between the steering wheel and the screen, and area C the touch display used for the NDRT.
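A simple way to produce statistics like the area shares in Figure 6b is to classify each gaze sample into an area of interest (AOI). The sketch below assumes hypothetical normalized screen regions for areas A, B, and C; the real boundaries depend on the simulator’s monitor layout.

```python
# Normalized screen regions for areas A (front screen), B (cluster monitor),
# and C (NDRT touch display). Boundaries are hypothetical placeholders.
AOIS = {
    "A_front_screen": (0.0, 0.4, 1.0, 1.0),   # (x0, y0, x1, y1)
    "B_cluster":      (0.3, 0.2, 0.6, 0.4),
    "C_ndrt_display": (0.6, 0.0, 1.0, 0.2),
}

def classify_gaze(x: float, y: float) -> str:
    """Assign a normalized gaze sample to an area of interest."""
    for name, (x0, y0, x1, y1) in AOIS.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return "off_area"

# Example: fraction of samples falling on the NDRT display.
samples = [(0.7, 0.1), (0.5, 0.8), (0.8, 0.05)]
ndrt_share = sum(classify_gaze(x, y) == "C_ndrt_display"
                 for x, y in samples) / len(samples)
```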

2.3. Driver–Vehicle Interface (DVI)

In 2016, the National Highway Traffic Safety Administration (NHTSA) proposed overall DVI development guidelines [61] including visual, auditory, and haptic interfaces. In 2018, additional human factor design guidance was mentioned, especially for Level 2 and Level 3 automated vehicles [62].
Studies on the use and content of text, shape and color of icons, and display position for visual notifications have been conducted [63,64,65]. For auditory notifications, the type of sound (voice message or simple beep), volume, period, and direction were considered [66]. For haptic notification, the part where feedback is provided (steering, pedal, seat, or seat belt), the strength of the feedback, and the provision of spatial information were researched [66,67].
In this study, the visual, auditory, and haptic interfaces were constructed in the simulator by referring to the NHTSA DVI design guidelines [62], UNECE ALKS proposal [5], and previous studies [63,64,65,66,67]. Visual information was provided by the vehicle cluster monitor shown in Figure 7a and the heads-up display (HUD) on the front driving screen shown in Figure 7b. The cluster monitor can be vertically divided into three areas: A, B, and C. The primary information provided by each area, respectively, was as follows:
  • Area A: vehicle speed, gear status, and gaze recognition status LED (green = eyes open, red = eyes closed).
  • Area B: autonomous driving system status (text message, image), emergency stoplight, and driving map.
  • Area C: steering control/forward gaze request LED (red = request, off = no request) and autonomous driving mode LED (green = autonomous, off = manual).
The current vehicle speed and the steering control request icon were displayed on the HUD.
The auditory notification was provided as three different simple beeps: for driving mode changes, forward gaze requests, and takeover requests. The takeover request alarm increased in frequency as the time remaining until autonomous driving became unavailable decreased, alerting the driver. Finally, the haptic warning was delivered by strengthening the vibration intensity and frequency of the vibration module installed in the driver’s seat of the simulator as the time to system failure shortened. According to NHTSA’s DVI guidelines, vibrotactile seat displays are highly likely to be detected by most drivers.
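A minimal sketch of this urgency escalation is given below; the paper states only that the alarm frequency and vibration intensity increase as the time to disengagement shrinks, so the linear mapping and the specific rates are assumptions.

```python
def alarm_parameters(ttd_s: float, ttd_at_request_s: float = 5.0):
    """Beep repetition rate and seat-vibration intensity as TTD shrinks.

    Both grow linearly with urgency here; the exact mapping used in the
    simulator is not specified in the paper, so these values are assumed.
    """
    urgency = 1.0 - max(0.0, min(ttd_s, ttd_at_request_s)) / ttd_at_request_s
    beep_rate_hz = 1.0 + 4.0 * urgency       # 1 Hz at TTD = 5 s, 5 Hz at TTD = 0 s
    vibration_level = 0.3 + 0.7 * urgency    # normalized 0..1 seat-motor duty
    return beep_rate_hz, vibration_level
```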

2.4. Autonomous Driving Disengagement Simulations

A total of 39 participants (20 women, 19 men) between 24 and 49 years of age (M = 34.69, SD = 8.24), with driving experience ranging from 1 to 29 years (M = 9.95, SD = 8.56), participated in the simulation. They were university students, homemakers, teachers, office workers, or people in driving-related occupations. As shown in Figure 8, the overall experiment followed the order: predrive questionnaire, sensor attachment (calibration), practice driving, partially automated driving experiment, and postdrive questionnaire.
The partially automated driving experiment was conducted on a complex urban and high-speed road with a total length of approximately 40 km. In the autonomous driving mode, drivers watched their favorite YouTube videos and other content or used their smartphones. In most related studies, participants performed specific secondary tasks such as N-back and surrogate reference tasks. However, as automated vehicles spread and free up the driver’s time, a gradual increase in in-vehicle media consumption, following the general trends shown in Figure 9, is also expected [68]. We considered it highly likely that participants would use smartphones or tablet PCs for simple tasks such as online meetings and e-mail, or use applications such as social media, games, streaming services, and the Internet.
Scenarios to which an ALKS can respond are defined in the UNECE proposal. In this experiment, events outside that scenario scope, in which the autonomous driving system fails to operate, were configured on the driving course. The categories of the disengagement scenarios were as follows:
A: A small dynamic obstacle suddenly enters the driving lane from the side.
B: A vehicle with abnormal behavior poses a collision risk from the front or side.
C: Driving lanes are reduced because of a long construction area or objects falling in front of the vehicle.
During one round of the course, 12 scenarios of the above three types occurred. Across three repeated drives, a single-modality notification and two- or three-modality complex notifications were used to provide takeover requests 5 s before the time to collision (TTC) or time to disengagement (TTD). Following the recommendations of the NHTSA DVI guidance [62] and the UNECE ALKS proposal [5], when both buttons on the steering wheel were pressed simultaneously for more than a certain period (1 s here), the mode switched from manual to autonomous if the operation was possible. Conversely, if both buttons were pressed, the steering was operated above a specific torque, or the brake pedal was pressed, the driving mode changed to manual. After completing the drives, the participants filled out a subjective questionnaire evaluation focusing on the sense of stability, comfort, willingness to use, and commercialization potential of the automated driving system.
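The driver-initiated mode transitions described above can be summarized in a small function. In the sketch below, the 1 s button-hold time comes from the text, while the 3 Nm steering override threshold is an assumed stand-in for the unspecified "specific torque".

```python
def next_mode(mode: str, buttons_held_s: float, steer_torque_nm: float,
              brake_pressed: bool, autonomy_available: bool,
              hold_s: float = 1.0, override_torque_nm: float = 3.0) -> str:
    """Driver-initiated mode transitions described in Section 2.4 (sketch).

    The 1 s button-hold time comes from the text; the 3 Nm steering
    override threshold is an assumed value ('over a specific torque').
    """
    if mode == "MANUAL" and buttons_held_s >= hold_s and autonomy_available:
        return "AUTONOMOUS"
    if mode == "AUTONOMOUS" and (buttons_held_s >= hold_s
                                 or abs(steer_torque_nm) > override_torque_nm
                                 or brake_pressed):
        return "MANUAL"
    return mode
```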

3. Takeover Experiment Results and Discussion

When comparing driver responsiveness, a few abnormally large or small values can significantly distort the average because each dependent variable differs slightly in scale. Outliers in the collected data were removed using the median absolute deviation (MAD), a filtering technique that identifies outliers from the deviation of each sample relative to the spread of the data. The MAD for $N$ data points and the standardized score $z_i$ of the $i$-th data point $x_i$ are given as

$$\mathrm{MAD} = \frac{1}{N} \sum_{i=1}^{N} \left| x_i - \bar{x} \right|,$$

$$z_i = \frac{x_i - \bar{x}}{\mathrm{MAD}}.$$
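A minimal sketch of this filtering step is shown below. Note that, as printed, the formula is the mean absolute deviation even though the text names the median absolute deviation; the sketch follows the printed equations, and the cutoff of |z| ≤ 3 is an assumed value not stated in the paper.

```python
import numpy as np

def mad_filter(x: np.ndarray, z_max: float = 3.0) -> np.ndarray:
    """Drop samples whose MAD-standardized score exceeds z_max.

    Follows the printed equations (mean absolute deviation); the
    z_max = 3 cutoff is an assumed value not stated in the paper.
    """
    mad = np.mean(np.abs(x - x.mean()))   # Equation (1) as printed
    z = (x - x.mean()) / mad              # Equation (2)
    return x[np.abs(z) <= z_max]

takeover_times = np.array([2.1, 2.4, 2.3, 2.2, 2.5, 2.6, 2.0, 2.4, 9.7, 2.3])
print(mad_filter(takeover_times))   # the 9.7 s outlier (z = 5.0) is removed
```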
In a scenario where the autonomous driving system is urgently canceled (when the driver is focusing on NDRT), the minimum required time for disengagement and effective notification methods can be determined by analyzing the time for takeover, eye movement, and vehicle control inputs (Table 1). The mean and SD values of the response variables of the drivers for each notification method are presented in Table 2 and Table 3. The data distribution is visualized as a box-and-whisker plot. In addition, for each variable, we evaluated whether there was a significant difference according to the notification method through analysis of variance (ANOVA). Regardless of the notification method in the takeover process, the scenario types that recorded a high accident rate were classified and analyzed for common characteristics.
In addition to the numerical results, a questionnaire evaluation was conducted for each notification method considering five items: degree of stability, concentration improvement, understanding of the notification, overall satisfaction, and applicability to actual autonomous vehicles. A score from 1 (bad) to 7 (good) was selected for each item. The survey results can serve as a basis for choosing a notification method when drivers’ response times are similar.
We determined whether the corresponding biometric data could serve as a criterion for determining driving concentration by analyzing the difference between the driver’s breathing and heart rate in manual and autonomous driving. Finally, the change in takeover safety was observed by evaluating driver availability at regular intervals and providing a warning by referring to the UNECE ALKS proposal.

3.1. Single Modality Takeover Request

As described in Figure 8 and Section 2.4, single modality takeover experiments were conducted in the following order: predrive questionnaire, sensor attachment, practice manual driving, and partially automated driving experiment. Twelve event scenarios occurred while driving the course in Figure 3, and one of three single notifications was provided. Figure 10 shows the drivers’ response to 264 takeover scenarios using a single modality notification method of visual, auditory, or haptic notification. Since we analyzed driver reaction after the notification was provided, the case wherein the driver was looking ahead for more than 1 s before the notification was excluded from the analysis.
None of the six variables showed distinct differences depending on the notification method as assessed with one-way analysis of variance (ANOVA) (tTOR, p = 0.14; tEyeIn, p = 0.556; tEyeMv, p = 0.903; Ap, p = 0.58; Bp, p = 0.73; Steer, p = 0.479). Analysis of variance is a hypothesis testing method used when comparing three or more groups. One-way ANOVA tests for significant differences between groups under the conditions of one independent variable and one dependent variable. This experiment evaluated whether the “type of notification” caused a difference in each dependent variable. As shown in Table 2, the single modality notification methods took about 2.8 s of takeover time and about 2 s of gaze road fixation time on average. Under all three notification methods, drivers tended to press the brake pedal before operating the steering (70% applied the brake first, and the overall average difference was 0.24 s); the time difference between the two inputs was shortest for the visual notification, at 0.17 s.
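For reference, a one-way ANOVA of this kind takes only a few lines with SciPy. The group samples below are hypothetical stand-ins, since the per-trial data behind Table 2 are not published.

```python
import numpy as np
from scipy import stats

# Hypothetical takeover-time samples (s) grouped by single-notification
# modality; illustrative values only.
visual   = np.array([2.9, 2.6, 3.1, 2.8, 2.7, 3.0])
auditory = np.array([2.8, 3.0, 2.7, 2.9, 2.6, 2.8])
haptic   = np.array([2.7, 2.9, 3.0, 2.6, 2.8, 2.9])

f_stat, p_value = stats.f_oneway(visual, auditory, haptic)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")  # p > 0.05: no effect of modality
```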
Although the single visual notification yielded the shortest takeover time, the questionnaire evaluation showed that it received the most negative ratings of the three methods (Figure 11; average scores 3.4, 4.9, 5.4). Most drivers with negative responses indicated that they could not freely focus on their secondary tasks in the autonomous driving mode because they had to constantly watch the cluster monitor to avoid missing the visual notification (76% rated the single visual notification as an inefficient method). The haptic notification delivered through the vibrotactile seat was the best in terms of improving concentration; however, there were cases of surprise or discomfort, as shown in Figure 12.

3.2. Multimodality Takeover Request

As shown in Figure 8, the multimodality takeover experiments were conducted on the same course as the single modality experiment. One of the complex notification methods was provided during each of two rounds of the course. When complex notifications combining visual, auditory, and haptic features were provided, driver reactions to the takeover scenario were observed 377 times; the results are shown in Figure 13. For complex modalities composed of two notifications, one-way ANOVA showed no significant difference in any of the dependent variables (tTOR, p = 0.065; tEyeIn, p = 0.22; tEyeMv, p = 0.98; Ap, p = 0.65; Bp, p = 0.10; Steer, p = 0.29), as with the single-modality result. However, compared with single notifications, differences were observed in the takeover time and in the brake and steering operation timing, which are important variables in the takeover response. When all three notifications were included in the multimodal notification, the brake timing (p = 0.034) and steering timing (p = 0.006) were clearly distinct, while the takeover time (p = 0.058) differed marginally.
Regardless of the detailed notification configuration, when data were analyzed by dividing them into single, dual, and triple modalities, apparent statistical differences were observed (tTOR, tEyeIn, Bp, Steer: p < 0.0001, tEyeMv: p < 0.05). The results confirmed that driver responsiveness differed depending on the complexity of the notification rather than its type. The multimodal notification method took about 2 s of takeover time, and the road fixation time was within 1.5 s. Therefore, the warning delivery effect to the driver was superior to the single modality notification (Table 3).
As with the single modality results, the experimental results were good when a visual notification was included; however, there were many negative responses in the questionnaire, and the score was relatively low (Figure 14; average scores were 5.26, 5.31, 6.15, 6.52). The combination of auditory and haptic notifications scored similarly overall to providing all three notifications and ranked second in the short-answer question about notification effectiveness shown in Figure 12.
Among the ten questions in the postdrive questionnaire, items with similar meanings were classified into four categories: “efficient/helpful”, “inefficient/uncomfortable”, “familiar/preferred”, and “applicable to automated vehicles”, as shown in Figure 12. Overall, the most positively scored methods for takeover notification were all three modes, sound and vibration, and sound and visual notification, in that order. Many responses indicated that visual notifications were easy to miss if participants did not keep an eye on the screen. Furthermore, there were cases wherein sound notifications were not delivered well because of heavy driving noise or because they mixed with the sound of the NDRT content. The vibrotactile seat notification, which is always in contact with the driver, was the most effective for getting the driver’s attention, as it was the most clearly distinguishable from ordinary driving sensations. However, there were negative evaluations of this notification method, pointing out that it caused discomfort or anxiety (due to surprise) depending on the location or intensity of the vibration.
We also analyzed the correlation between personal preference and takeover performance, which had been suggested as a subject for further study. Each participant’s quantitative questionnaire results were regarded as personal preferences, and their correlation with gaze response and takeover time was tested using Pearson correlation coefficients. There was a clear correlation between the subjective preference for a notification method and the gaze road fixation time (r > 0.7 for approximately 31% of participants and r > 0.3 for 61.5%). There was also a very strong correlation between road fixation time and takeover time (r > 0.7 for 54% and r > 0.3 for 39%). Therefore, it is valid to reflect subjective preference to improve takeover performance. Takeover warnings can be delivered effectively by providing visual, sound, and vibration notifications sequentially according to the degree of urgency based on TTC or TTD. In addition, each person can be notified more effectively by selecting the type or location of the notification within a specific range according to driver preference.
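The per-participant correlation test can be sketched as follows; the preference scores and fixation times are hypothetical values for illustration only.

```python
import numpy as np
from scipy import stats

# Hypothetical per-event pairs for one participant: preference score (1-7)
# for the notification received and gaze road fixation time (s).
preference = np.array([3, 5, 6, 4, 7, 5, 6, 4])
fixation_s = np.array([2.3, 1.9, 1.5, 2.1, 1.2, 1.8, 1.4, 2.0])

r, p = stats.pearsonr(preference, fixation_s)
print(f"r = {r:.2f}, p = {p:.3f}")  # negative r: preferred alarms draw the gaze faster
```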

3.3. Accident Rate of Scenario

In this section, the accident rate across a total of 641 takeover situations in the single and multimodality takeover alarm experiments is discussed for each scenario. When an accident occurred within 3 s after the driver who received the request started manual control, the takeover process was considered to have failed (Figure 15a). Scenarios with a takeover failure rate of more than 5% included B3 (small dark obstacle on the road ahead), A1/B2 (combined scenario: inability to drive autonomously due to road construction with a worker crossing from the side), A5 (small wild animals dashing into the roadway), and A2 (unexpected pedestrian crossing at a bus stop). The high-accident-rate scenarios shared a common feature: a small, inconspicuous obstruction directly blocked the path of the autonomous vehicle. Drivers who received the warning had difficulty understanding the situation even though they had the same amount of time as in low-accident-rate scenarios such as lane loss or construction sections. Therefore, the system must decide whether to execute emergency behavior depending on both the time to disengagement and the type of cause. For high-risk cases, the minimum risk maneuver (MRM) must be initiated earlier when the driver does not respond to the takeover request, and an emergency maneuver (EM) is required to avoid imminent collision risk.
Among the 641 takeover cases, 37 (5.8%) accidents occurred, and with multimodality notification it took approximately 2 s to take over. There was therefore some margin relative to the time at which the notification was provided (TTD < 5 s). Nevertheless, about 47.6% of drivers assessed the notification time as insufficient and requested additional time of about 1.67 s on average (Figure 15b).
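Computing the per-scenario failure rates behind Figure 15a amounts to a simple aggregation, sketched below with a hypothetical trial log; the paper’s overall figure is 37 failures in 641 takeovers.

```python
from collections import Counter

def failure_rates(trials):
    """Takeover failure rate per scenario, where a trial is a
    (scenario_id, failed) pair and 'failed' means an accident occurred
    within 3 s of the start of manual control."""
    total, failed = Counter(), Counter()
    for scenario, did_fail in trials:
        total[scenario] += 1
        failed[scenario] += did_fail
    return {s: failed[s] / total[s] for s in total}

# Hypothetical log entries for illustration only.
log = [("B3", True), ("B3", False), ("A5", False), ("A2", True), ("A2", False)]
print(failure_rates(log))   # {'B3': 0.5, 'A5': 0.0, 'A2': 0.5}
```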

3.4. Driver Availability

The UNECE ALKS proposal recommends checking driver availability at regular intervals based on eye blinking and head or body movement and providing an immediate warning if the driver is deemed unavailable. We analyzed the difference in driver responses when driver availability warnings were provided periodically and when they were not. The driver availability experiment was conducted with the 19 of the 39 participants recruited in the second experiment schedule. The experimental scenarios were designed with more severe avoidance difficulty than the previous takeover experiments to observe differences in the accident rate. The takeover request was provided in the same way as the multimodal notification, and the experimental sequence was randomized to exclude learning effects from repeated trials.
Figure 16 shows the takeover success rate, response time, and concentration on the NDRT according to the driver availability warning. When the driver was periodically required to look ahead, reaction times were more than 0.9 s faster, and the accident rate was 23 percentage points lower (Table 4). The share of the total driving time focused on the NDRT decreased by nearly 50 percentage points, from approximately 80% to 30%. Therefore, the safety of highly automated vehicles can be increased by a function that requests the driver to periodically check the driving situation. Current commercialized partially automated vehicles only check whether the driver is holding the steering wheel; however, it is also essential to check whether the driving situation is observed through gaze or head direction for a certain amount of time (3 s here). A follow-up study on methods that guide the driver to grasp the current driving situation more rapidly and effectively is needed.
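A minimal sketch of the periodic availability check described above follows. The 3 s gaze requirement comes from the text, while the 30 s check interval is an assumption, since the paper says only "regular intervals".

```python
class AvailabilityMonitor:
    """Periodically require the driver to watch the road for 3 s (sketch).

    The 30 s check interval is assumed; only the 3 s gaze requirement
    comes from the text.
    """
    def __init__(self, interval_s: float = 30.0, required_gaze_s: float = 3.0):
        self.interval_s = interval_s
        self.required_gaze_s = required_gaze_s
        self.since_check_s = 0.0
        self.gaze_streak_s = 0.0

    def update(self, dt_s: float, gaze_on_road: bool) -> bool:
        """Advance by dt_s; return True if a 'look ahead' warning should fire."""
        self.since_check_s += dt_s
        self.gaze_streak_s = self.gaze_streak_s + dt_s if gaze_on_road else 0.0
        if self.gaze_streak_s >= self.required_gaze_s:
            self.since_check_s = 0.0    # driver confirmed attentive; reset timer
            self.gaze_streak_s = 0.0
        return self.since_check_s >= self.interval_s
```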

3.5. Driver Biosignals

During all takeover experiments, we collected driver biosignal data and reduced errors caused by driver movement through filtering. The raw data from each sensor described in Section 2.2 were processed to measure the respiration rate, heart rate, and NDRT concentration while driving (Figure 17). We conducted t-tests to determine whether drivers’ heart rates or respiration rates differed significantly between manual and autonomous driving. First, since each data set contained 30 or more points, normality was assumed by the central limit theorem, and homogeneity of variance was confirmed with Levene’s test. For heart rate and respiration rate, the difference according to driving mode was then tested with an independent-samples t-test under the assumption of equal variances.
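The two-step test procedure can be reproduced with SciPy as sketched below; the synthetic heart-rate samples only mimic the published means and standard deviations, as the per-minute data are not available.

```python
import numpy as np
from scipy import stats

def compare_driving_modes(manual: np.ndarray, autonomous: np.ndarray):
    """Levene's test for equal variances, then an independent-samples
    t-test, mirroring the procedure described above. Illustrative only."""
    _, levene_p = stats.levene(manual, autonomous)
    _, t_p = stats.ttest_ind(manual, autonomous, equal_var=levene_p > 0.05)
    return levene_p, t_p

rng = np.random.default_rng(seed=1)
hr_manual = rng.normal(64.3, 18.8, size=120)      # synthetic heart rates (bpm)
hr_autonomous = rng.normal(63.0, 18.3, size=150)
print(compare_driving_modes(hr_manual, hr_autonomous))
```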
Figure 18 shows the average and variation of heart rate for each driver over a total of 1021 min of manual driving and 1476 min of autonomous driving. Heart rate was generally lower during autonomous driving (M = 62.96, SD = 18.27) than during manual driving (M = 64.29, SD = 18.81), but the difference was not statistically significant in a t-test of the two groups (p = 0.81). The respiration rate decreased slightly during autonomous driving (M = 20.35, SD = 3.49) compared with manual driving (M = 20.73, SD = 3.35); this difference was also not significant (p = 0.49). Therefore, heart rate and respiration rate are inappropriate as criteria for directly judging concentration on the driving task; they are better used for identifying abnormal conditions such as drowsiness or shock.

3.6. Takeover Strategy

Experimental results on the DVI configuration method, accident rate by scenario type, and effectiveness of DMS in the takeover process are summarized as follows.
  • The effect of takeover request notification type is clearly distinguished for single/dual/triple modalities (in Section 3.2, tTOR, tEyeIn, Bp, Steer: p < 0.0001, tEyeMv: p < 0.05 with ANOVA for single/dual/triple modality analysis).
  • Notification modalities preferred by drivers are better at inducing driver concentration (in Section 3.2, driver preference and gaze road fixation time, 31% for r > 0.7, and 61.5% for r > 0.3 with Pearson’s r).
  • Depending on the type of system failure cause, the timing of the transition to emergency behavior (MRM, EM) must be different (in Section 3.3, cases with high driver recognition difficulty).
  • The safety of the transition process is improved if the DMS makes the driver check the driving scenario periodically (in Section 3.4, improvement of reaction time and takeover failure rate with DMS).
A flow chart of the takeover process of a highly automated vehicle, based on the results of all the experiments, is provided in Figure 19. At the beginning of the procedure, the system receives the driver’s intent to use automation through the DVI, checks whether the seat belt is fastened, and verifies that the system’s operational design domain (ODD) is satisfied; it then decides whether to switch to the autonomous driving mode. While the autonomous mode is active, the system continuously checks driver availability using gaze and head orientation information as well as the system operating conditions. The system periodically makes the driver aware of the driving environment to improve reaction speed and success rate in takeover scenarios.
When an autonomous driving system detects a failure because of various causes, the following two responses are possible:
First, if there is sufficient time, a takeover request is issued, and visual, auditory, and haptic notifications are added progressively based on TTD (e.g., visual → visual and auditory → visual, auditory, and haptic). Furthermore, the warning effect can be enhanced by adjusting the notification intensity and the order of addition within a certain range according to driver preference. If the driver cannot respond in time, or if the situation worsens into severe system failure, the MRM must be initiated immediately.
Second, the EM should be carried out in scenarios where it is difficult for the driver to determine the cause or where collision risk is imminent. The EM should not be terminated until the imminent collision risk disappears or the driver deactivates the system. The proposed strategy can improve the safety of the takeover process, the transfer of control rights being potentially the most dangerous moment for an autonomous driving system of Level 3 or higher. Furthermore, providing a sense of psychological stability to the user can improve the willingness to use an autonomous vehicle.
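The top-level branching of this strategy can be sketched as follows. The 5 s request horizon matches the experiments; the rest is a simplified reading of Figure 19, not the exact implementation.

```python
def system_failure_response(ttd_s: float, driver_took_over: bool,
                            cause_recognizable: bool,
                            collision_imminent: bool) -> str:
    """Top-level branching of the proposed strategy (sketch of Figure 19).

    Thresholds and predicates are simplified assumptions; the paper issued
    takeover requests at TTD = 5 s.
    """
    if collision_imminent or not cause_recognizable:
        return "EMERGENCY_MANEUVER"       # EM: a takeover request is too risky
    if ttd_s >= 5.0:
        # Escalate DVI warnings: visual -> +auditory -> +haptic, ordered by
        # driver preference within an allowed range.
        return "HANDOVER" if driver_took_over else "MINIMUM_RISK_MANEUVER"
    return "MINIMUM_RISK_MANEUVER"        # insufficient time to request takeover
```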

4. Conclusions

The purpose of this study was to develop a takeover strategy that maximizes safety by considering factors affecting the takeover process that were only partially covered in previous studies. We attempted to improve on the artificial secondary tasks and limited driving scenarios previously used for quantitative evaluation. To reflect realistic use of a highly automated vehicle as much as possible, we designed driving scenarios in which disengagement occurs by referring to disengagement reports and traffic accident statistics. Drivers were asked to take over while using a smartphone or watching content on a monitor (as in real scenarios) in the autonomous driving mode.
There are three critical considerations for takeover safety:
  • Delivering notifications through a complex configuration of the driver–vehicle interface is effective. At this time, the effectiveness of the notification delivery may vary according to driver preferences.
  • In situations where it is difficult for the driver to immediately understand the cause of the disengagement, it is better to enter the emergency behavior right away.
  • The risk of accidents can be reduced using a driver monitoring system to ensure that the driver does not become completely disengaged from the driving task.
In future studies, we plan to analyze in detail the types of scenarios that require emergency behavior and specifically design and evaluate minimum risk maneuvers. In addition, research on DVI that can help the driver to understand the situation effectively is necessary when a driving situation check or takeover request is provided.

Author Contributions

Conceptualization, D.Y.; software, D.Y. and C.P.; validation, D.Y.; formal analysis, D.Y. and H.C.; investigation, D.Y., D.K. and H.C.; data curation, D.Y.; writing—original draft preparation, D.Y.; writing—review and editing, D.Y.; visualization, D.Y. and D.K.; supervision, C.P. and S.-H.H.; funding acquisition, S.-H.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Ministry of Science and ICT, Korea, under the Information Technology Research Center support program (IITP-2021-2018-0-01426) supervised by the Institute for Information & Communications Technology Planning & Evaluation and the program for fostering next-generation researchers in engineering of the National Research Foundation of Korea funded by the Ministry of Science and ICT (2017H1D8A2031628).

Institutional Review Board Statement

Ethical review and approval were waived for this study since there was no threat to the health and life of the participants. The participants did not take any medications, drugs, or other medical treatments.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Acknowledgments

The authors are very grateful to the journal editors and reviewers.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Skilton, M.; Hovsepian, F. The 4th Industrial Revolution: Responding to the Impact of Artificial Intelligence on Business; Springer: Berlin/Heidelberg, Germany, 2017. [Google Scholar]
  2. Ghahari, S.; Assi, L.; Carter, K.; Ghotbi, S. The Future of Hydrogen Fueling Systems for Fully Automated Vehicles. In Proceedings of the International Conference on Transportation and Development 2019: Innovation and Sustainability in Smart Mobility and Smart Cities, Alexandria, VA, USA, 9–12 June 2019; pp. 66–76. [Google Scholar]
  3. SAE International (J3016_201609). Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles. 2018. Available online: https://saemobilus.sae.org/content/j3016_201609 (accessed on 16 July 2021).
  4. Jenssen, G.D.; Moen, T.; Johnsen, S.O. Accidents with Automated Vehicles-Do Self-Driving Cars Need a Better Sense of Self? In Proceedings of the 26th ITS World Congress, Singapore, 21–25 October 2019; pp. 21–25. [Google Scholar]
  5. United Nations Economic Commission for Europe (UNECE). Proposal for a New UN Regulation on Uniform Provisions Concerning the Approval of Vehicles with Regards to Automated Lane Keeping System. 2020. Available online: https://unece.org/fileadmin/DAM/trans/doc/2020/wp29grva/GRVA-06-02r4e.pdf (accessed on 16 July 2021).
  6. California DMV. Testing of Autonomous Vehicles. Available online: https://www.dmv.ca.gov/portal/vehicle-industry-services/autonomous-vehicles/ (accessed on 16 July 2021).
  7. California DMV. Disengagement Report 2018. 2018. Available online: https://www.dmv.ca.gov/portal/vehicle-industry-services/autonomous-vehicles/disengagement-reports/ (accessed on 16 July 2021).
  8. California DMV. Disengagement Report 2020. 2020. Available online: https://www.dmv.ca.gov/portal/vehicle-industry-services/autonomous-vehicles/disengagement-reports/ (accessed on 16 July 2021).
  9. Favarò, F.; Eurich, S.; Nader, N. Autonomous vehicles’ disengagements: Trends, triggers, and regulatory limitations. Accid. Anal. Prev. 2018, 110, 136–148. [Google Scholar] [CrossRef]
  10. Braunagel, C.; Rosenstiel, W.; Kasneci, E. Ready for take-over? A new driver assistance system for an automated classification of driver take-over readiness. IEEE Intell. Transp. Syst. Mag. 2017, 9, 10–22. [Google Scholar] [CrossRef]
  11. Euro NCAP. Euro NCAP 2025 Roadmap. 2020. Available online: https://cdn.euroncap.com/media/30700/euroncap-roadmap-2025-v4.pdf (accessed on 16 July 2021).
  12. Biondi, F.; Coleman, J.; Cooper, J.; Strayer, D. Average heart rate for driver monitoring systems. Int. J. Hum. Factors Ergon. 2016, 4, 282–291. [Google Scholar] [CrossRef]
  13. Kiashari, S.E.H.; Nahvi, A.; Homayounfard, A.; Bakhoda, H. Monitoring the variation in driver respiration rate from wakefulness to drowsiness: A non-intrusive method for drowsiness detection using thermal imaging. J. Sleep Sci. 2018, 3, 1–9. [Google Scholar]
  14. Wang, Y.; Reimer, B.; Dobres, J.; Mehler, B. The sensitivity of different methodologies for characterizing drivers’ gaze concentration under increased cognitive demand. Transp. Res. Part F Traffic Psychol. Behav. 2014, 26, 227–237. [Google Scholar] [CrossRef]
  15. Niezgoda, M.; Tarnowski, A.; Kruszewski, M.; Kamiński, T. Towards testing auditory–vocal interfaces and detecting distraction while driving: A comparison of eye-movement measures in the assessment of cognitive workload. Transp. Res. Part F Traffic Psychol. Behav. 2015, 32, 23–34. [Google Scholar] [CrossRef]
  16. Yang, F.; He, Z.; Guo, S.; Fu, Y.; Li, L.; Lu, J.; Jiang, K. Non-contact driver respiration rate detection technology based on suppression of multipath interference with directional antenna. Information 2020, 11, 192. [Google Scholar] [CrossRef] [Green Version]
  17. Morales-Alvarez, W.; Sipele, O.; Léberon, R.; Tadjine, H.H.; Olaverri-Monreal, C. Automated driving: A literature review of the take over request in conditional automation. Electronics 2020, 9, 2087. [Google Scholar] [CrossRef]
  18. Christian Müller-Tomfelde, V. Takeover at Level 3 Automated Driving. 2019. Available online: https://www.visteon.com/wp-content/uploads/2019/01/takeover-at-level-3-automated-driving.pdf (accessed on 16 July 2021).
  19. Wu, C.; Wu, H.; Lyu, N.; Zheng, M. Take-over performance and safety analysis under different scenarios and secondary tasks in conditionally automated driving. IEEE Access 2019, 7, 136924–136933. [Google Scholar] [CrossRef]
  20. Favaro, F.; Eurich, S.; Rizvi, S.; Mahmood, S.; Nader, N. Analysis of Disengagements in Semi-Autonomous Vehicles: Drivers’ Takeover Performance and Operational Implications; Mineta Transportation Institute: San Jose, CA, USA, 2019. [Google Scholar]
  21. Park, M.; Son, J. Reference test scenarios for assessing the safety of take-over in a conditionally autonomous vehicle. Trans KSAE 2017, 27, 309–317. [Google Scholar] [CrossRef]
  22. Lee, J.; Yun, H.; Kim, J.; Baek, S.; Han, H.; Maryam FakhrHosseini, S.; Vasey, E.; Lee, O.; Jeon, M.; Yang, J.H.; et al. Design of single-modal take-over request in SAE level 2 & 3 automated vehicle. Trans. Korean Soc. Automot. Eng. 2019, 27, 171–183. [Google Scholar]
  23. Richardson, N.T.; Flohr, L.; Michel, B. Takeover Requests in Highly Automated Truck Driving: How Do the Amount and Type of Additional Information Influence the Driver–Automation Interaction? Multimodal Technol. Interact. 2018, 2, 68. [Google Scholar] [CrossRef] [Green Version]
  24. Lorenz, L.; Kerschbaum, P.; Schumann, J. Designing take over scenarios for automated driving: How does augmented reality support the driver to get back into the loop? In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Chicago, IL, USA, 27–31 October 2014; pp. 1681–1685. [Google Scholar]
  25. Radlmayr, J.; Gold, C.; Lorenz, L.; Farid, M.; Bengler, K. How Traffic Situations and Non-Driving Related Tasks Affect the Take-Over Quality in Highly Automated Driving. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Chicago, IL, USA, 27–31 October 2014; pp. 1681–1685. [Google Scholar]
  26. Gold, C.; Körber, M.; Lechner, D.; Bengler, K. Taking over control from highly automated vehicles in complex traffic situations: The role of traffic density. Hum. Factors 2016, 58, 642–652. [Google Scholar] [CrossRef]
  27. Zeeb, K.; Buchner, A.; Schrauf, M. Is take-over time all that matters? The impact of visual-cognitive load on driver take-over quality after conditionally automated driving. Accid. Anal. Prev. 2016, 92, 230–239. [Google Scholar] [CrossRef] [PubMed]
  28. Zeeb, K.; Buchner, A.; Schrauf, M. What determines the take-over time? An integrated model approach of driver take-over after automated driving. Accid. Anal. Prev. 2015, 78, 212–221. [Google Scholar] [CrossRef]
  29. Eriksson, A.; Stanton, N.A. Takeover time in highly automated vehicles: Noncritical transitions to and from manual control. Hum. Factors 2017, 59, 689–705. [Google Scholar] [CrossRef]
  30. Yun, S.; Teshima, T.; Nishimura, H. Human–Machine Interface Design and Verification for an Automated Driving System Using System Model and Driving Simulator. IEEE Consum. Electron. Mag. 2019, 8, 92–98. [Google Scholar] [CrossRef]
  31. Bianchi Piccinini, G.; Lehtonen, E.; Forcolin, F.; Engström, J.; Albers, D.; Markkula, G.; Lodin, J.; Sandin, J. How do drivers respond to silent automation failures? Driving simulator study and comparison of computational driver braking models. Hum. Factors 2020, 62, 1212–1229. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  32. Blommer, M.; Curry, R.; Swaminathan, R.; Tijerina, L.; Talamonti, W.; Kochhar, D. Driver brake vs. steer response to sudden forward collision scenario in manual and automated driving modes. Transp. Res. Part F Traffic Psychol. Behav. 2017, 45, 93–101. [Google Scholar] [CrossRef]
  33. Petermeijer, S.; Doubek, F.; de Winter, J. Driver Response Times to Auditory, Visual, and Tactile Take-Over Requests: A Simulator Study with 101 Participants. In Proceedings of the 2017 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Banff, AB, Canada, 5–8 October 2017; pp. 1505–1510. [Google Scholar]
  34. Lotz, A.; Russwinkel, N.; Wohlfarth, E. Response times and gaze behavior of truck drivers in time critical conditional automated driving take-overs. Transp. Res. Part F Traffic Psychol. Behav. 2019, 64, 532–551. [Google Scholar] [CrossRef]
  35. Bruck, L.; Haycock, B.; Emadi, A. A review of driving simulation technology and applications. IEEE Open J. Veh. Technol. 2021, 2, 1–16. [Google Scholar] [CrossRef]
  36. Irwin, C.; Iudakhina, E.; Desbrow, B.; McCartney, D. Effects of acute alcohol consumption on measures of simulated driving: A systematic review and meta-analysis. Accid. Anal. Prev. 2017, 102, 248–266. [Google Scholar] [CrossRef]
  37. Iwata, M.; Iwamoto, K.; Kitajima, I.; Nogi, T.; Onishi, K.; Kajiyama, Y.; Nishino, I.; Ando, M.; Ozaki, N. Validity and reliability of a driving simulator for evaluating the influence of medicinal drugs on driving performance. Psychopharmacology 2021, 238, 775–786. [Google Scholar] [CrossRef]
  38. Wang, Y.; Mehler, B.; Reimer, B.; Lammers, V.; D’Ambrosio, L.A.; Coughlin, J.F. The validity of driving simulation for assessing differences between in-vehicle informational interfaces: A comparison with field testing. Ergonomics 2010, 53, 404–420. [Google Scholar] [CrossRef]
  39. Zhang, Y.; Guo, Z.; Sun, Z. Driving simulator validity of driving behavior in work zones. J. Adv. Transp. 2020, 2020, 1–10. [Google Scholar] [CrossRef]
  40. Blana, E. Driving Simulator Validation Studies: A Literature Review; Institute of Transport Studies, University of Leeds: Leeds, UK, 1996. [Google Scholar]
  41. Tomasevic, N.; Horberry, T.; Young, K.; Fildes, B. Validation of a driving simulator for research into human factors issues of automated vehicles. J. Australas. Coll. Road Saf. 2019, 30, 37–44. [Google Scholar] [CrossRef]
  42. Kaptein, N.A.; Theeuwes, J.; Van Der Horst, R. Driving simulator validity: Some considerations. Transp. Res. Rec. 1996, 1550, 30–36. [Google Scholar] [CrossRef]
  43. Samiee, S.; Nahvi, A.; Azadi, S.; Kazemi, R.; Hatamian Haghighi, A.; Ashouri, M.R. The effect of torque feedback exerted to driver’s hands on vehicle handling—A hardware-in-the-loop approach. Syst. Sci. Control. Eng. 2015, 3, 129–141. [Google Scholar] [CrossRef] [Green Version]
  44. Liu, A.; Chang, S. Force Feedback in a Stationary Driving Simulator. In Proceedings of the 1995 IEEE International Conference on Systems, Man and Cybernetics. Intelligent Systems for the 21st Century, Vancouver, BC, Canada, 22–25 October 1995; pp. 1711–1716. [Google Scholar]
  45. Schwarz, C.; Gaspar, J.; Miller, T.; Yousefian, R. The detection of drowsiness using a driver monitoring system. Traffic Inj. Prev. 2019, 20, S157–S161. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  46. Kim, W.; Jung, W.-S.; Choi, H.K. Lightweight driver monitoring system based on multi-task MobileNets. Sensors 2019, 19, 3200.
  47. Friedrichs, F.; Yang, B. Drowsiness Monitoring by Steering and Lane Data Based Features under Real Driving Conditions. In Proceedings of the 2010 18th European Signal Processing Conference, Aalborg, Denmark, 23–27 August 2010; pp. 209–213.
  48. Li, Z.; Li, S.E.; Li, R.; Cheng, B.; Shi, J. Online detection of driver fatigue using steering wheel angles for real driving conditions. Sensors 2017, 17, 495.
  49. Khan, M.Q.; Lee, S. A comprehensive survey of driving monitoring and assistance systems. Sensors 2019, 19, 2574.
  50. Vicente, J.; Laguna, P.; Bartra, A.; Bailón, R. Drowsiness detection using heart rate variability. Med. Biol. Eng. Comput. 2016, 54, 927–937.
  51. Furman, G.D.; Baharav, A.; Cahan, C.; Akselrod, S. Early Detection of Falling Asleep at the Wheel: A Heart Rate Variability Approach. In Proceedings of the 2008 Computers in Cardiology, Bologna, Italy, 14–17 September 2008; pp. 1109–1112.
  52. Karavidas, M.K.; Lehrer, P.M.; Lu, S.-E.; Vaschillo, E.; Vaschillo, B.; Cheng, A. The effects of workload on respiratory variables in simulated flight: A preliminary study. Biol. Psychol. 2010, 84, 157–160.
  53. Hidalgo-Muñoz, A.R.; Béquet, A.J.; Astier-Juvenon, M.; Pépin, G.; Fort, A.; Jallais, C.; Tattegrain, H.; Gabaude, C. Respiration and heart rate modulation due to competing cognitive tasks while driving. Front. Hum. Neurosci. 2019, 12, 525.
  54. Shinar, Z.; Akselrod, S.; Dagan, Y.; Baharav, A. Autonomic changes during wake–sleep transition: A heart rate variability based approach. Auton. Neurosci. 2006, 130, 17–27.
  55. Kiashari, S.E.H.; Nahvi, A.; Bakhoda, H.; Homayounfard, A.; Tashakori, M. Evaluation of driver drowsiness using respiration analysis by thermal imaging on a driving simulator. Multimed. Tools Appl. 2020, 79, 17793–17815.
  56. Husen, M.N.; Lee, S.; Khan, M.Q. Syntactic Pattern Recognition of Car Driving Behavior Detection. In Proceedings of the 11th International Conference on Ubiquitous Information Management and Communication, Beppu, Japan, 5–7 January 2017; pp. 1–6.
  57. Hogsett, J.; Kiger, S. Driver Workload Metrics Project: Task 2 Final Report; National Highway Traffic Safety Administration: Washington, DC, USA, 2006.
  58. Liao, Y.; Li, S.E.; Wang, W.; Wang, Y.; Li, G.; Cheng, B. Detection of driver cognitive distraction: A comparison study of stop-controlled intersection and speed-limited highway. IEEE Trans. Intell. Transp. Syst. 2016, 17, 1628–1637.
  59. Liao, Y.; Li, G.; Li, S.E.; Cheng, B.; Green, P. Understanding driver response patterns to mental workload increase in typical driving scenarios. IEEE Access 2018, 6, 35890–35900.
  60. Gibaldi, A.; Vanegas, M.; Bex, P.J.; Maiello, G. Evaluation of the Tobii EyeX eye tracking controller and Matlab toolkit for research. Behav. Res. Methods 2017, 49, 923–946.
  61. Campbell, J.; Brown, J.; Graving, J.; Richard, C.; Lichty, M.; Sanquist, T.; Morgan, J. Human Factors Design Guidance for Driver–Vehicle Interfaces; National Highway Traffic Safety Administration: Washington, DC, USA, 2016; Volume 812, p. 360.
  62. Campbell, J.L.; Brown, J.L.; Graving, J.S.; Richard, C.M.; Lichty, M.G.; Bacon, L.P.; Morgan, J.F.; Li, H.; Williams, D.N.; Sanquist, T. Human Factors Design Guidance for Level 2 and Level 3 Automated Driving Concepts; National Highway Traffic Safety Administration: Washington, DC, USA, 2018.
  63. Baldwin, C.L.; Eisert, J.L.; Garcia, A.; Lewis, B.; Pratt, S.M.; Gonzalez, C. Multimodal urgency coding: Auditory, visual, and tactile parameters and their impact on perceived urgency. Work 2012, 41, 3586–3591.
  64. Walch, M.; Lange, K.; Baumann, M.; Weber, M. Autonomous Driving: Investigating the Feasibility of Car–Driver Handover Assistance. In Proceedings of the 7th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Nottingham, UK, 1–3 September 2015; pp. 11–18.
  65. Eriksson, A.; Petermeijer, S.M.; Zimmermann, M.; De Winter, J.C.; Bengler, K.J.; Stanton, N.A. Rolling out the red (and green) carpet: Supporting driver decision making in automation-to-manual transitions. IEEE Trans. Hum. Mach. Syst. 2018, 49, 20–31.
  66. Gold, C.; Damböck, D.; Lorenz, L.; Bengler, K. “Take Over!” How Long Does it Take to Get the Driver back into the Loop? In Proceedings of the 57th Human Factors and Ergonomics Society Annual Meeting, San Diego, CA, USA, 30 September–4 October 2013; pp. 1938–1942.
  67. Petermeijer, S.M.; Cieler, S.; De Winter, J.C. Comparing spatially static and dynamic vibrotactile take-over requests in the driver seat. Accid. Anal. Prev. 2017, 99, 218–227.
  68. The Nielsen Company. The Nielsen Total Audience Report: April 2020. Available online: https://www.nielsen.com/us/en/insights/report/2020/the-nielsen-total-audience-report-august-2020/ (accessed on 16 July 2021).
Figure 1. California DMV disengagement report summary [8,9].
Figure 2. Takeover process [17].
Figure 3. HITL simulator.
Figure 4. Steering system and control system block diagram.
Figure 5. Biosignal acquisition while driving: (a) ECG (electrocardiography) and (b) respiration.
Figure 6. Gaze tracking: (a) calibration targets; (b) gaze tracking results (manual driving heatmap).
Figure 7. Visual interface: (a) cluster monitor and (b) HUD.
Figure 8. Schematic of the timeline of the experiments.
Figure 9. Average time spent per day [68].
Figure 10. Driver reactions to single-modality TORs (n_visual = 103, n_auditory = 69, n_haptic = 92).
Figure 11. Survey evaluations for single-modality TORs (n = 39).
Figure 12. Short-answer questionnaire evaluation results.
Figure 13. Driver reactions to multimodal TORs (n_A&V = 89, n_H&V = 62, n_H&A = 87, n_all = 139).
Figure 14. Survey evaluations for multimodal TORs (n = 39).
Figure 15. Accident rate and takeover time requirement (n_A1/B2 = 70, n_A2 = 72, n_A3 = 61, n_A4 = 41, n_A5 = 44, n_B1 = 83, n_B3 = 46, n_C1 = 68, n_C2 = 66, n_C3 = 48, n_C4 = 42).
Figure 16. Driving results according to driver availability warnings (n_w/ = 50, n_w/o = 43).
Figure 17. Heart rate, respiration rate, and NDRT concentration data.
Figure 18. Heart rate and respiration rate in manual/autonomous driving (n_heart = 46, n_respiration = 79).
Figure 19. Takeover strategy of a highly automated vehicle.
Table 1. Dependent variables measured in the study.
Dependent Variable | Variable Name | Explanation
Takeover time | tTOR | Time between the TOR (takeover request) and the start of manual driving
Gaze road fixation | tEyeIn | Time until the gaze reaches the front driving area
Gaze departure | tEyeMv | Time until the gaze leaves the NDRT monitor
Accelerator pedal input | Ap time | Time until the driver presses the accelerator pedal
Brake pedal input | Bp time | Time until the driver presses the brake pedal
Steering input | Str time | Time until the driver operates the steering wheel
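All of the latencies in Table 1 are measured from the moment the TOR is issued. As a minimal illustration of how such variables can be derived from a simulator event log, the Python sketch below uses hypothetical field names (t_tor, t_manual, and so on) and toy values; it is not the study's analysis code, only a sketch of the bookkeeping the table implies.

```python
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class TakeoverEvent:
    # All values are absolute timestamps (s) from a hypothetical simulator log.
    t_tor: float           # takeover request issued
    t_manual: float        # manual driving resumed
    t_eye_on_road: float   # gaze first fixates the front driving area
    t_eye_off_ndrt: float  # gaze leaves the NDRT monitor
    t_brake: float         # first brake pedal press

def dependent_variables(e: TakeoverEvent) -> dict:
    """Convert absolute timestamps into the TOR-relative latencies of Table 1."""
    return {
        "tTOR": e.t_manual - e.t_tor,
        "tEyeIn": e.t_eye_on_road - e.t_tor,
        "tEyeMv": e.t_eye_off_ndrt - e.t_tor,
        "Bp time": e.t_brake - e.t_tor,
    }

# Toy usage: summarize takeover time over trials as mean (SD),
# the format used in Tables 2 and 3.
trials = [TakeoverEvent(10.0, 12.7, 12.0, 11.2, 13.1),
          TakeoverEvent(55.0, 57.9, 56.8, 55.9, 58.0)]
t_tor = [dependent_variables(t)["tTOR"] for t in trials]
print(f"tTOR: {mean(t_tor):.2f} ({stdev(t_tor):.2f})")
```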
Table 2. Means and standard deviations of dependent variables for single-modality TORs; values in seconds, mean (SD).
Variable | Visual | Auditory | Haptic
Takeover time (tTOR) | 2.74 (1.51) | 2.86 (1.50) | 2.86 (1.09)
Gaze road fixation (tEyeIn) | 1.97 (1.27) | 2.06 (1.28) | 1.95 (1.32)
Gaze departure (tEyeMv) | 1.24 (0.88) | 1.29 (1.53) | 1.26 (1.34)
Accelerator pedal input (Ap time) | 4.63 (2.32) | 5.21 (2.36) | 5.11 (2.53)
Brake pedal input (Bp time) | 2.98 (1.83) | 3.17 (2.31) | 3.09 (2.52)
Steering input (Str time) | 3.15 (1.81) | 3.56 (2.12) | 3.30 (1.29)
Table 3. Means and standard deviations of dependent variables for multimodal TORs; values in seconds, mean (SD).
Variable | Auditory and Visual | Haptic and Visual | Haptic and Auditory | All
Takeover time (tTOR) | 2.03 (1.38) | 2.04 (1.31) | 2.12 (0.85) | 1.81 (0.87)
Gaze road fixation (tEyeIn) | 1.50 (1.44) | 1.45 (1.25) | 1.47 (1.10) | 1.23 (0.91)
Gaze departure (tEyeMv) | 0.91 (0.83) | 0.94 (0.98) | 0.93 (0.83) | 0.81 (0.77)
Accelerator pedal input (Ap time) | 4.90 (2.11) | 4.73 (1.71) | 5.21 (2.46) | 4.42 (2.44)
Brake pedal input (Bp time) | 2.19 (1.50) | 2.06 (1.34) | 2.34 (0.95) | 1.88 (1.32)
Steering input (Str time) | 2.55 (1.78) | 2.34 (1.41) | 2.83 (1.63) | 2.04 (1.03)
Table 4. Driving results according to the request to look forward.
Metric | w/ DMS | w/o DMS
Takeover failure | 14% | 37.2%
Reaction time, mean | 1.18 s | 2.09 s
Reaction time, SD | 0.88 s | 1.59 s
NDRT percentage | 29.8% | 79.1%
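Table 4 contrasts runs in which the DMS issued a look-forward request with runs in which it did not. The sketch below illustrates one plausible form such an availability warning could take: a sliding-window rule over the gaze classifications. The window length, threshold, and class names here are illustrative assumptions, not parameters reported in this study.

```python
from collections import deque

class AvailabilityMonitor:
    """Hypothetical DMS rule: warn when the driver's gaze has dwelt on the
    NDRT monitor for too large a share of a sliding window of gaze samples."""

    def __init__(self, window_frames: int = 90, ndrt_share_limit: float = 0.8):
        self.history = deque(maxlen=window_frames)  # e.g., 3 s at 30 FPS
        self.ndrt_share_limit = ndrt_share_limit    # illustrative threshold

    def update(self, gaze_on_ndrt: bool) -> bool:
        """Feed one gaze sample; return True when a look-forward warning is due."""
        self.history.append(gaze_on_ndrt)
        if len(self.history) < self.history.maxlen:
            return False  # not enough samples to judge availability yet
        ndrt_share = sum(self.history) / len(self.history)
        return ndrt_share > self.ndrt_share_limit

# Toy usage: stream per-frame gaze classifications from the eye tracker.
monitor = AvailabilityMonitor()
for frame_gaze in [True] * 120:  # driver stares at the NDRT monitor
    if monitor.update(frame_gaze):
        print("DMS: please look forward")
        break
```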
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
