# The Effects of Dynamic Complexity on Drivers’ Secondary Task Scanning Behavior under a Car-Following Scenario


## Abstract


## 1. Introduction

#### 1.1. Background

#### 1.2. Literature Review

#### 1.2.1. Adaptive Vehicle Human–Computer Interaction Systems

#### 1.2.2. Secondary Task Carrying Capacity

#### 1.3. Study Aim

## 2. Materials and Methods

#### 2.1. Selection of Secondary Task and Design Principles

- (1) The number of rows and columns of the icon matrix should be consistent with the size of the background.
- (2) There should be an intelligent match between the icon area and the number of icons in the interface.

The layout parameters involved are p, m_{a}, and n_{a}, as listed in Table 1. The calculation method can be expressed as follows:

- (3) The icons are arranged symmetrically in the center of the interface.

Each icon in row j and column k is described by its upper left corner coordinates (x_{1}(j,k), y_{1}(j,k)) and lower right corner coordinates (x_{2}(j,k), y_{2}(j,k)). The calculation method can be expressed as follows:
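The original coordinate equations did not survive extraction, so the following is only a minimal sketch of such a layout computation. It assumes a uniform grid in which each icon has side length p and adjacent icons are separated by horizontal spacing m_{a} and vertical spacing n_{a} (parameter names taken from Table 1; the grid-origin convention is an assumption):

```python
def icon_corners(j, k, p, m_a, n_a):
    """Corner coordinates of the icon in row j, column k (0-indexed).

    Assumed layout: a uniform grid where each icon is a square of side
    length p, with horizontal spacing m_a and vertical spacing n_a
    between icons. This is a sketch only; the paper's equations for
    x1, y1, x2, y2 were lost in extraction.
    """
    x1 = k * (p + m_a)   # upper left x: k columns of icon-plus-gap
    y1 = j * (p + n_a)   # upper left y: j rows of icon-plus-gap
    x2 = x1 + p          # lower right x
    y2 = y1 + p          # lower right y
    return (x1, y1), (x2, y2)
```

For example, with the 2 × 2 layout from Table 1 (p = 315 pix, m_{a} = 20 pix, n_{a} = 90 pix), the icon at (0, 0) occupies (0, 0) to (315, 315).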

#### 2.2. Evaluation Model of Secondary Task Carrying Capacity under a Car-Following Scenario

Suppose the spacing between the driver’s vehicle and the front vehicle is d_{0}(i), the speed of the driver’s vehicle is v_{1}(i), the single scanning time of the secondary task is t(i), the complexity of the secondary task is C, and the total scanning time required for the driver to operate the secondary task is T. The total scanning time is affected by the vehicle speed, the spacing between the two vehicles, and the complexity of the secondary task; therefore, T can be expressed by Equation (12) as follows:

- (1) The average single scanning time (including sight transfer time) should not exceed 2.2 s;
- (2) The scanning times of a single secondary task should not exceed four;
- (3) The total scanning time of a single secondary task should not exceed 15 s.
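The three criteria above translate directly into a simple safety check. The thresholds (2.2 s, four scans, 15 s) are taken from the text; the function name and interface are illustrative:

```python
def meets_scanning_criteria(avg_scan_s, n_scans, total_scan_s):
    """Check the three evaluation criteria for a secondary task:
    (1) average single scanning time (incl. sight transfer) <= 2.2 s,
    (2) at most four scans per single secondary task,
    (3) total scanning time per single secondary task <= 15 s.
    """
    return avg_scan_s <= 2.2 and n_scans <= 4 and total_scan_s <= 15.0
```

A task averaging 1.8 s per scan over 3 scans (5.4 s total) passes; raising the average to 2.5 s, or the scan count to 5, fails the check.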

## 3. Experimental Design and Data Acquisition

#### 3.1. Experimental Equipment

The time at which the gaze point leaves area 1 is recorded as the beginning time of a single scan, t_{b}(i), and the time at which the gaze point returns to area 1 is recorded as the end time of the scan, t_{e}(i). The single scanning time t(i) can then be expressed by Equation (15) as follows:

t(i) = t_{e}(i) − t_{b}(i)
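Given paired leave/return timestamps from the eye tracker's area-of-interest log, the single scanning times and their sum can be computed as follows (a sketch; the pairing of events is assumed to be already established upstream):

```python
def single_scan_times(leave_times, return_times):
    """Single scanning times t(i) = t_e(i) - t_b(i), where t_b(i) is the
    moment the gaze leaves area 1 (the road ahead) and t_e(i) the moment
    it returns. Timestamps are in seconds."""
    return [te - tb for tb, te in zip(leave_times, return_times)]

def total_scan_time(scan_times):
    """Total scanning time T for one secondary task: the sum over scans."""
    return sum(scan_times)
```

For two scans leaving at 0.0 s and 5.0 s and returning at 1.2 s and 6.5 s, the scan times are 1.2 s and 1.5 s, giving T = 2.7 s.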

#### 3.2. Experimental Scheme

… m·s^{−2} and 0.6 m·s^{−2} [31]). The road type in the scenario was a two-way, six-lane urban road (as shown in Figure 4). The road was 20 km long, the driver’s vehicle traveled in the middle lane, the traffic flow on both sides of the middle lane was 300 veh·h^{−1}, and the average speed of the traffic flow was 60 km·h^{−1}.

The front vehicle traveled in the middle lane at a constant speed, which was set to 20 km·h^{−1}, 30 km·h^{−1}, 40 km·h^{−1}, 50 km·h^{−1}, 60 km·h^{−1}, and 70 km·h^{−1}, respectively. At each vehicle speed, the spacing between the driver’s vehicle and the front vehicle was set to 10 m, 15 m, 20 m, 25 m, 30 m, and 35 m, respectively. Drivers followed the front vehicle, and collisions and lane changes were not allowed throughout the process.
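The scheme above is a fully crossed design: six front-vehicle speeds by six spacings. A small sketch enumerating the experimental conditions, with the factor values taken directly from the scheme:

```python
from itertools import product

# Experimental factors: six constant front-vehicle speeds (km/h) and
# six car-following spacings (m), fully crossed.
SPEEDS_KMH = [20, 30, 40, 50, 60, 70]
SPACINGS_M = [10, 15, 20, 25, 30, 35]

# 36 (speed, spacing) conditions in total.
conditions = list(product(SPEEDS_KMH, SPACINGS_M))
```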

#### 3.3. Data Collection

## 4. Results

#### 4.1. Average Single Scanning Time

The relationship between the average single scanning time and vehicle speed can be expressed by a negative logarithmic regression model (R^{2} = 0.962), the relationship between the average single scanning time and vehicle spacing by a positive logarithmic regression model (R^{2} = 0.992), and the relationship between the average single scanning time and the number of icons by a positive linear regression model (R^{2} = 0.735). The joint relationship between vehicle speed, vehicle spacing, the number of icons, and the average single scanning time, taken at the upper limit of the 95% confidence interval, can be expressed by the multivariate nonlinear fitting model shown in Equation (16).
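The fitted Equation (16) itself did not survive extraction. Purely as a sketch, a composite form consistent with the reported component trends (negative logarithmic in speed, positive logarithmic in spacing, positive linear in icon count) could look like the following; the coefficients a, b, c, e are entirely hypothetical:

```python
import math

def avg_single_scan_time(v_kmh, d_m, n_icons,
                         a=-0.5, b=0.6, c=0.05, e=2.0):
    """Illustrative composite of the reported trends for average single
    scanning time: decreasing in vehicle speed (negative log term),
    increasing in vehicle spacing (positive log term), increasing in
    icon count (linear term). Coefficients are hypothetical; the paper's
    fitted Equation (16) was lost in extraction.
    """
    return a * math.log(v_kmh) + b * math.log(d_m) + c * n_icons + e
```

Whatever the true coefficients, a model of this shape reproduces the qualitative findings: scan time falls with speed and rises with spacing and icon count.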

#### 4.2. Total Scanning Time

The relationship between the total scanning time and vehicle speed can be expressed by a positive exponential regression model (R^{2} = 0.985), the relationship between the total scanning time and vehicle spacing by a negative logarithmic regression model (R^{2} = 0.903), and the relationship between the total scanning time and the number of icons by a positive linear regression model (R^{2} = 0.922). The joint relationship between vehicle speed, vehicle spacing, the number of icons, and the total scanning time, taken at the upper limit of the 95% confidence interval, can be expressed by the multivariate nonlinear fitting model shown in Equation (17).

#### 4.3. Scanning Times

The relationship between the scanning times and vehicle speed can be expressed by a positive exponential regression model (R^{2} = 0.979), the relationship between the scanning times and vehicle spacing by a negative logarithmic regression model (R^{2} = 0.974), and the relationship between the scanning times and the number of icons by a positive linear regression model (R^{2} = 0.783). The joint relationship between vehicle speed, vehicle spacing, the number of icons, and the scanning times, taken at the upper limit of the 95% confidence interval, can be expressed by the multivariate nonlinear fitting model shown in Equation (18).

## 5. Discussion

The secondary task carrying capacity of icons P(N) at different vehicle speeds in the range of 20–70 km·h^{−1} and vehicle spacings in the range of 10–35 m is calculated by Equation (20) and shown in Figure 9. The secondary task carrying capacity is rounded by the constraints of Equation (2). The maximum number of icons at different vehicle speeds within the range of 20–70 km·h^{−1} and vehicle spacings within the range of 10–35 m is calculated as shown in Figure 10, and the specific values are shown in Table 2.
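The rounding step can be sketched as follows: given a continuous carrying capacity from the fitted models, the displayed icon count is rounded down to the largest feasible matrix size from Table 1 (4, 6, 8, 9, 12, 15, 16, 20, or 24 icons), or 0 if even the smallest layout would overload the driver. The function name and interface are illustrative, not the paper's:

```python
# Feasible icon counts for the candidate matrix layouts in Table 1
# (2x2 through 4x6).
FEASIBLE_COUNTS = [4, 6, 8, 9, 12, 15, 16, 20, 24]

def round_to_feasible(capacity):
    """Largest feasible icon count not exceeding the continuous carrying
    capacity; 0 if even the smallest layout (2 x 2, four icons) exceeds
    it. Sketch of the rounding described in the Discussion."""
    feasible = [n for n in FEASIBLE_COUNTS if n <= capacity]
    return max(feasible, default=0)
```

Note that every entry in Table 2 (4, 6, 9, 12, 16, 20, 24, or 0) is consistent with this feasible set.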

To ensure that the complexity of the secondary task does not exceed the driver’s carrying capacity, the vehicle spacing should not be less than 10 m, 13.5 m, 18.6 m, and 25.7 m when the vehicle speed is 40 km·h^{−1}, 50 km·h^{−1}, 60 km·h^{−1}, and 70 km·h^{−1}, respectively.

## 6. Conclusions

- (1) The relationships between vehicle speed, vehicle spacing, the number of icons, and the average single scanning time can be expressed by a negative logarithmic model, a positive logarithmic model, and a positive linear model, respectively. For the total scanning time, the corresponding relationships follow a positive exponential model, a negative logarithmic model, and a positive linear model; the same holds for the scanning times. Combining these relationships with the evaluation criteria for driving secondary tasks, we calculated the maximum number of icons at different vehicle speeds and vehicle spacings. In this way, the number of icons on the central control screen can be adjusted dynamically under the car-following scenario, avoiding traffic accidents caused by attention overload.
- (2) The average single scanning time for secondary tasks trends downward when the vehicle speed increases or the vehicle spacing decreases, and upward when the number of icons in the secondary task increases. This reveals that as the traffic environment becomes more complex, the driver actively increases the proportion of attention allocated to the main driving task to ensure traffic safety. A highly complex secondary task, however, weakens this effect, so the secondary task load can exceed the driver's safety threshold and easily lead to traffic accidents.
- (3) With decreasing vehicle speed or increasing vehicle spacing, the impact of these two factors on the secondary task carrying capacity decreases gradually, showing a marginal decreasing effect. The carrying capacity is more sensitive to vehicle spacing than to vehicle speed. To ensure that the complexity of the secondary task does not exceed the driver's carrying capacity while the central control interface can still display icons, the vehicle spacing should not be less than 10 m, 13.5 m, 18.6 m, and 25.7 m when the vehicle speed is 40 km·h^{−1}, 50 km·h^{−1}, 60 km·h^{−1}, and 70 km·h^{−1}, respectively.

## Author Contributions

## Funding

## Institutional Review Board Statement

## Informed Consent Statement

## Data Availability Statement

## Conflicts of Interest

## References

1. Iio, K.; Guo, X.; Lord, D. Examining driver distraction in the context of driving speed: An observational study using disruptive technology and naturalistic data. Accid. Anal. Prev. 2021, 153, 105983.
2. Ma, J.; Gong, Z.; Tan, J.; Zhang, Q.; Zuo, Y. Assessing the driving distraction effect of vehicle HMI displays using data mining techniques. Transp. Res. Part F Traffic Psychol. Behav. 2020, 69, 235–250.
3. Ahangari, S.; Jeihani, M.; Ardeshiri, A.; Rahman, M.M.; Dehzangi, A. Enhancing the performance of a model to predict driving distraction with the random forest classifier. Transp. Res. Rec. 2021, 2675, 612–622.
4. Ning, Z.; Zhang, K.; Wang, X.; Guo, L.; Hu, X.; Huang, J.; Hu, B.; Ricky, Y.K.K. Intelligent edge computing in internet of vehicles: A joint computation offloading and caching solution. IEEE Trans. Intell. Transp. Syst. 2020, 22, 2212–2225.
5. Ding, N.; Lu, Z.; Jiao, N.; Lu, L. Quantifying effects of reverse linear perspective as a visual cue on vehicle and platoon crash risk variations in car-following using path analysis. Accid. Anal. Prev. 2021, 159, 106215.
6. Biondi, F.; Alvarez, I.; Jeong, K.A. Human-vehicle cooperation in automated driving: A multidisciplinary review and appraisal. Int. J. Hum.-Comput. Interact. 2019, 35, 932–946.
7. Duric, Z.; Gray, W.D.; Heishman, R.; Li, F.; Rosenfeld, A.; Schoelles, M.J.; Schunn, C.; Wechsler, H. Integrating perceptual and cognitive modeling for adaptive and intelligent human-computer interaction. Proc. IEEE 2002, 90, 1272–1289.
8. Ekman, F.; Johansson, M.; Sochor, J. Creating appropriate trust in automated vehicle systems: A framework for HMI design. IEEE Trans. Hum.-Mach. Syst. 2017, 48, 95–101.
9. Ulahannan, A.; Jennings, P.; Oliveira, L.; Birrell, S. Designing an adaptive interface: Using eye tracking to classify how information usage changes over time in partially automated vehicles. IEEE Access 2020, 8, 16865–16875.
10. Pinotti, D.; Piccinini, G.F.B.; Tango, F. Adaptive human machine interface based on the detection of driver’s cognitive state using machine learning approach. Intell. Artif. 2014, 8, 163–179.
11. Oviedo, T.O.; Haque, M.M.; King, M.; Demmel, S. Driving behaviour while self-regulating mobile phone interactions: A human-machine system approach. Accid. Anal. Prev. 2018, 118, 253–262.
12. de Naurois, C.J.; Bourdin, C.; Bougard, C.; Vercher, J. Adapting artificial neural networks to a specific driver enhances detection and prediction of drowsiness. Accid. Anal. Prev. 2018, 121, 118–128.
13. Wang, X.; Zheng, X.; Chen, W.; Wang, F. Visual Human-Computer Interactions for Intelligent Vehicles and Intelligent Transportation Systems: The State of the Art and Future Directions. IEEE Trans. Syst. Man Cybern. Syst. 2020, 51, 253–265.
14. Blomeyer, D.; Schulte-Gehrmann, A.L. Surface innovations for interiors of future vehicles. ATZ Worldw. 2019, 121, 48–51.
15. Klumpp, M.; Zijm, H. Logistics innovation and social sustainability: How to prevent an artificial divide in Human-Computer Interaction. J. Bus. Logist. 2019, 40, 265–278.
16. Klumpp, M.; Hesenius, M.; Meyer, O.; Ruiner, C.; Gruhn, V. Production logistics and human-computer interaction—state-of-the-art, challenges and requirements for the future. Int. J. Adv. Manuf. Technol. 2019, 105, 3691–3709.
17. Riegler, A.; Wintersberger, P.; Riener, A.; Holzmann, C. Augmented Reality Windshield Displays and Their Potential to Enhance User Experience in Automated Driving. i-com J. Interact. Media 2019, 18, 127–149.
18. Rahmati, Y.; Talebpour, A.; Mittal, A.; Fishelson, J. Game Theory-Based Framework for Modeling Human–Vehicle Interactions on the Road. Transp. Res. Rec. 2020, 2674, 701–713.
19. Niu, J.; Wang, X.; Liu, X.; Wang, D.; Qin, H.; Zhang, Y. Effects of mobile phone use on driving performance in a multiresource workload scenario. Traffic Inj. Prev. 2019, 20, 37–44.
20. Lin, R.; Liu, N.; Ma, L.; Zhang, T.; Zhang, W. Exploring the self-regulation of secondary task engagement in the context of partially automated driving: A pilot study. Transp. Res. Part F Traffic Psychol. Behav. 2019, 64, 147–160.
21. Faure, V.; Lobjois, R.; Benguigui, N. The effects of driving environment complexity and dual tasking on drivers’ mental workload and eye blink behavior. Transp. Res. Part F Traffic Psychol. Behav. 2016, 40, 78–90.
22. Hensch, A.; Rauh, N.; Schmidt, C.; Hergeth, S.; Naujoks, F.; Krems, J.F.; Keinath, A. Effects of secondary tasks and display position on glance behavior during partially automated driving. Transp. Res. Part F Traffic Psychol. Behav. 2020, 68, 23–32.
23. Noble, A.M.; Miles, M.; Perez, M.A.; Guo, F.; Klauer, S.G. Evaluating driver eye glance behavior and secondary task engagement while using driving automation systems. Accid. Anal. Prev. 2021, 151, 105959.
24. Metz, B.; Landau, A.; Just, M. Frequency of secondary tasks in driving–Results from naturalistic driving data. Saf. Sci. 2014, 68, 195–203.
25. Metz, B.; Schömig, N.; Krüger, H.P. Attention during visual secondary tasks in driving: Adaptation to the demands of the driving task. Transp. Res. Part F Traffic Psychol. Behav. 2011, 14, 369–380.
26. Guo, B.; Jin, L.; Sun, D.; Shi, J.; Wang, F. Establishment of the characteristic evaluation index system of secondary task driving and analyzing its importance. Transp. Res. Part F Traffic Psychol. Behav. 2019, 64, 308–317.
27. Khawaja, M.A.; Chen, F.; Marcus, N. Analysis of collaborative communication for linguistic cues of cognitive load. Hum. Factors 2012, 54, 518–529.
28. Chen, F.; Ruiz, N.; Choi, E.; Epps, J.; Khawaja, M.A.; Taib, R.; Yin, B.; Wang, Y. Multimodal behavior and interaction as indicators of cognitive load. ACM Trans. Interact. Intell. Syst. 2013, 2, 1–36.
29. Khawaja, M.A.; Chen, F.; Marcus, N. Measuring cognitive load using linguistic features: Implications for usability evaluation and adaptive interaction design. Int. J. Hum.-Comput. Interact. 2014, 30, 343–368.
30. Hu, H.; Cheng, M.; Gao, F.; Sheng, Y.; Zheng, R. Driver’s Preview Modeling Based on Visual Characteristics through Actual Vehicle Tests. Sensors 2020, 20, 6237.
31. Rakha, H. Validation of Van Aerde’s simplified steady-state car-following and traffic stream model. Transp. Lett. 2009, 1, 227–244.

**Figure 2.** The travel process of the driver’s vehicle and the front vehicle in the car-following scenario when the driver is operating the secondary task.

**Figure 6.** Boxplot of average single scanning time at different vehicle speeds, vehicle spacings, and numbers of icons.

**Figure 7.** Boxplot of total scanning time at different vehicle speeds, vehicle spacings, and numbers of icons.

**Figure 8.** Boxplot of scanning times at different vehicle speeds, vehicle spacings, and numbers of icons.

**Figure 9.** Secondary task carrying capacity of icons P(N) at different vehicle speeds and vehicle spacings.

| Icon Matrix Dimension | p/pix | m_{a}/pix | n_{a}/pix | N |
|---|---|---|---|---|
| 2 × 2 | 315 | 20 | 90 | 4 |
| 2 × 3 | 287 | 26 | 20 | 6 |
| 2 × 4 | 210 | 77 | 20 | 8 |
| 3 × 3 | 204 | 20 | 72 | 9 |
| 3 × 4 | 204 | 20 | 17 | 12 |
| 3 × 5 | 164 | 40 | 20 | 15 |
| 4 × 4 | 148 | 20 | 62 | 16 |
| 4 × 5 | 148 | 20 | 27 | 20 |
| 4 × 6 | 134 | 23 | 20 | 24 |

Cell entries are the maximum number of icons N_{m} at each vehicle speed v and spacing d.

| d/m \ v/km·h^{−1} | 20 | 30 | 40 | 50 | 60 | 70 |
|---|---|---|---|---|---|---|
| 10 | 20 | 12 | 4 | 0 | 0 | 0 |
| 15 | 24 | 16 | 12 | 6 | 0 | 0 |
| 20 | 24 | 24 | 16 | 12 | 6 | 0 |
| 25 | 24 | 24 | 20 | 16 | 9 | 0 |
| 30 | 24 | 24 | 24 | 16 | 12 | 6 |
| 35 | 24 | 24 | 24 | 20 | 16 | 9 |

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Wang, L.; Li, H.; Guo, M.; Chen, Y.
The Effects of Dynamic Complexity on Drivers’ Secondary Task Scanning Behavior under a Car-Following Scenario. *Int. J. Environ. Res. Public Health* **2022**, *19*, 1881.
https://doi.org/10.3390/ijerph19031881
