Article

Research on Driver Status Recognition System of Intelligent Vehicle Terminal Based on Deep Learning

Yiming Xu, Wei Peng and Li Wang
College of Electrical Engineering, Nantong University, Nantong 226019, China
*
Author to whom correspondence should be addressed.
World Electr. Veh. J. 2021, 12(3), 137; https://doi.org/10.3390/wevj12030137
Submission received: 20 July 2021 / Revised: 21 August 2021 / Accepted: 23 August 2021 / Published: 27 August 2021

Abstract

Safe driving technology for automobiles is a topic of great current interest and is significant to society's transportation system. Monitoring vehicle driving behavior is the foundation and core of safe driving techniques. Studying existing vehicle safety technology not only improves the understanding of current progress in safe-driving research, but also provides a reference for future researchers. This paper proposes a state recognition system based on a three-dimensional convolutional neural network, which can identify several improper states frequently exhibited by drivers while driving, including drinking, making phone calls, and smoking, and can also issue alarm interventions. The system takes collected continuous video frames as the input of the three-dimensional convolutional network, carries out multi-level feature extraction and spatio-temporal information fusion, and identifies the driver's state according to the extracted spatio-temporal features. The state is judged from the facial feature points of the video stream, completing the design of a video-surveillance driver state recognition system. The driver status recognition is then improved and optimized, and finally the system is deployed on a mobile terminal. Extensive experimental results show that the proposed driver status recognition system achieves high identification accuracy.

1. Introduction

The rapid growth in car ownership over the past few years has led to a corresponding increase in traffic accidents. According to the World Health Organization (WHO), traffic accidents are among the top 10 causes of death globally [1]. Volvo’s accident report shows that nearly 90% of road traffic accidents are caused by human error. Advanced driver assistance systems are considered to be an effective solution to reduce human error and the corresponding traffic accident rate.
Several studies on monitoring driving behavior have been conducted in recent years. So far, physiological detection of drivers has mainly included EEG signal detection [2], ECG signal detection [3], and EMG signal detection [4,5]. These methods offer strong real-time performance and reliable results. However, they require electrodes to be attached to the driver, which drivers find hard to accept and which interferes with normal driving, so they are difficult to promote in practice. By contrast, electrical signals of the eye [6] are easier to collect than EEG and are less affected by slight noise, but they still require a head-mounted device for real-time acquisition. The video-based driver state recognition system proposed in this paper captures the driver's state through the vehicle camera and extracts feature information [7] from the video using a deep learning algorithm [8]. The system then identifies the driver's state [9] from this feature information, detects abnormal states in time, and issues an alarm. The proposed system does not interfere with the driver's normal driving, and it is highly practical and easy to popularize.
In the field of target detection [10], deep learning [11,12,13] has great advantages, which has also advanced the study of fatigue-driving detection. Zhu et al. proposed a fatigue detection regression model based on EOG. The model uses a convolutional neural network (CNN) [14,15] for unsupervised feature learning, replacing hand-designed feature extraction [16,17]. In addition, a linear dynamic system (LDS) [18,19] algorithm is used in post-processing to greatly reduce harmful interference. In 2017, Zeng et al. proposed an improved convolutional neural network for single-image super-resolution reconstruction, a new network that combines a dense residual network and a deconvolutional network [20,21]; its single-image and multi-level processing make image reconstruction easier. Compared with classic super-resolution reconstruction methods, this algorithm has advantages in edge sharpness and overall sharpness of the reconstructed image, and the peak signal-to-noise ratio is improved, raising the overall quality of the reconstruction. However, the edge details of the reconstructed image are not handled well enough, so the method is better suited to reconstructing multiple small scenes [22]. In the field of image vision, the visual features learned by a deep network show a robustness in different scenes that differs from that of traditional hand-crafted features [23], and they offer better robustness and higher prediction accuracy under varied conditions.
This paper classifies drivers' abnormal behaviors into the following categories: drinking, making phone calls, and smoking. Relevant public data sets were collected from the Internet and pre-processed: videos were divided into multiple consecutive frames at 15 frames per second, and unrelated video frames were removed. Deep learning techniques were used to complete the design of driver state recognition under the video monitoring system [24], and the deep learning framework PyTorch [25,26] was used to train the model on the prepared data sets. In addition, the driver state recognition technique was improved and optimized, raising both the accuracy and the recognition speed of the model. Finally, the driver status recognition system was deployed on the Jetson nano development board.
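As a concrete illustration of this preprocessing step, the sketch below samples a video at roughly 15 frames per second using OpenCV; the paper does not name the tool it used, so the library choice, function names, and file paths here are assumptions.

```python
import cv2
import os

def extract_frames(video_path, out_dir, target_fps=15):
    """Sample frames from a video at roughly target_fps and save them as images."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    src_fps = cap.get(cv2.CAP_PROP_FPS) or target_fps
    step = max(int(round(src_fps / target_fps)), 1)  # keep every `step`-th frame
    index, saved = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            cv2.imwrite(os.path.join(out_dir, f"frame_{saved:05d}.jpg"), frame)
            saved += 1
        index += 1
    cap.release()
    return saved
```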

2. Materials and Methods

2.1. Research Methods

2.1.1. Convolutional Neural Network

A neural network is a mathematical model for distributed information processing inspired by biological nervous systems. It was first proposed by the psychologist W. McCulloch and the mathematical logician W. Pitts, and is still in use today. The most basic component of a neural network is the neuron model [27], which is shown in Figure 1.
Each circle in Figure 1 represents a neuron, and each line represents a connection between neurons; the network is divided into many layers, with links between adjacent layers and none within the same layer [28]. It can be seen from the figure that the convolutional neural network follows a process similar to traditional classification methods, except that it does not require manually extracted features: it automatically learns relevant features through convolution operations and then performs classification.
In a biological neural network, the neuron corresponds to the perceptron of an artificial neural network. The perceptron is composed of input values, weights, an activation function, and an output. A perceptron can have more than one input x, each with a weight w. There are several options for the activation function, among which the most common is the sigmoid:
\mathrm{sigmoid}(w \times x) = \frac{1}{1 + e^{-w \times x}} .
The output of the model is:
y = f(w \times x + b) .
In Equation (2), w is the weight, x is the input, y is the output, and b is the offset.
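As an illustration of Equations (1) and (2), the following minimal NumPy sketch implements a single perceptron with a sigmoid activation; the input values and weights are arbitrary examples, not values from the paper.

```python
import numpy as np

def sigmoid(z):
    # Equation (1): squashes the weighted input into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def perceptron(x, w, b):
    # Equation (2): y = f(w . x + b), with f chosen as the sigmoid
    return sigmoid(np.dot(w, x) + b)

# Example: two inputs feeding one output neuron
x = np.array([0.5, -1.2])
w = np.array([0.8, 0.3])
b = 0.1
print(perceptron(x, w, b))
```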
In 1986, Hinton proposed backpropagation, in which new weights and other parameters are obtained by minimizing the error, after which the parameters of the whole network are updated. A learning rate λ (a hyperparameter) is specified; multiplying the gradient by the learning rate determines how much each weight and bias term changes after one training pass and provides the starting point for the next. This theory triggered an upsurge in the study of neural networks.
In recent years, the application of convolutional neural networks to image processing has been increasing. Because handwritten text is plentiful and highly varied, the accuracy of ordinary machine recognition is very low; convolutional neural networks solve this problem and greatly improve accuracy. Mobile phone unlocking methods continue to multiply, but in recent years face recognition has become the most popular unlocking scheme, which is also inseparable from the development of convolutional neural networks: the most important face recognition algorithms are based on CNNs. With the proposed YOLOv3 model, image classification and recognition become more accurate and the targets of interest can be identified automatically, which plays an important role in both military and civilian fields.
Neural networks are composed of neural units, which contain learnable weights and biases. Each neural unit computes its output from its inputs, namely forward propagation, according to the standard formulas of the neural network [29], as shown in Equation (3), where w and b are the weights and biases to be trained, x is the input, and S is the output. Comparing the output with the sample output gives the error value, as shown in Equation (4), where d is the output and y is the ground-truth value. The error computed by this function is propagated back layer by layer through the hidden layers, and the error and weights are repeatedly updated, that is, backpropagation, as shown in Equation (5), where E is the error and w is the weight, until the error converges and meets the accuracy requirement.
S_j = \sum_{i=0}^{m-1} w_{ij} x_i + b_j ,
E(w, b) = \frac{1}{2} \sum_{j=0}^{n-1} \left( d_j - y_j \right)^2 ,
\Delta w(i, j) = -\eta \, \frac{\partial E(w, b)}{\partial w(i, j)} .
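The following NumPy sketch ties Equations (3)–(5) together for a single linear layer, assuming a squared-error objective and plain gradient descent; the dimensions and learning rate are illustrative, not values from the paper.

```python
import numpy as np

def train_step(x, target, W, b, lr=0.1):
    """One forward/backward pass for a single linear layer, following Equations (3)-(5)."""
    # Forward propagation, Equation (3): s_j = sum_i w_ij * x_i + b_j
    s = W.T @ x + b
    # Error, Equation (4): squared difference between the network output and the target
    E = 0.5 * np.sum((target - s) ** 2)
    # Backpropagation, Equation (5): delta_w = -eta * dE/dw
    grad_s = s - target              # dE/ds_j
    W -= lr * np.outer(x, grad_s)    # dE/dw_ij = x_i * dE/ds_j
    b -= lr * grad_s
    return E, W, b

# Example: 3 inputs, 2 output units
x = np.array([0.5, -1.0, 0.2])
W = np.zeros((3, 2)); b = np.zeros(2)
E, W, b = train_step(x, np.array([1.0, 0.0]), W, b)
```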
The deep learning model is a multi-layer feature description method constructed from convolutional layers and hidden layers. Deep learning models mainly include the convolutional neural network (CNN), the deep Boltzmann machine (DBM), the restricted Boltzmann machine (RBM), and other models [30].
The convolutional neural network has its own special structure: a data layer, convolutional layers, pooling layers, activation layers, fully connected layers, and an output layer [31]. Among them, the convolutional layer is the core of the network. The convolutional layer performs the convolution operation on the image: a filter is placed over a certain position of the image, each value in the filter is multiplied by the value of the corresponding pixel, and the products are summed to give the value of the target pixel in the output image. This is repeated for all positions of the image. The activation layer is realized by the activation function, which adds nonlinear characteristics to the convolutional neural network and enables it to approximate any nonlinear function. Common activation functions include the sigmoid, ReLU, and tanh functions.
The pooling layer generally adopts max pooling, which takes the maximum value of a local region. Because the maximum of a local region does not change under certain scale changes, the feature map remains invariant. In addition, the pooling layer reduces the dimensionality of the features, reduces the amount of calculation, and speeds up inference.
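To make the layer types above concrete, the sketch below assembles a small 2D CNN in PyTorch (the framework used later in this paper) from convolution, ReLU activation, max pooling, and fully connected layers; the input resolution, channel widths, and class count are illustrative assumptions.

```python
import torch
import torch.nn as nn

# A minimal 2D CNN with the layer types described above:
# convolution -> activation -> max pooling -> fully connected output.
class SmallCNN(nn.Module):
    def __init__(self, num_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # convolutional layer
            nn.ReLU(),                                    # activation layer
            nn.MaxPool2d(2),                              # max pooling layer
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 56 * 56, num_classes)  # fully connected layer

    def forward(self, x):               # x: (batch, 3, 224, 224)
        x = self.features(x)
        x = torch.flatten(x, 1)
        return self.classifier(x)

logits = SmallCNN()(torch.randn(1, 3, 224, 224))
```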

2.1.2. 3D Convolutional Neural Network

The convolutional network for image recognition is generally a 2D convolutional neural network, while this paper focuses on video recognition with one dimension more information than the image. Therefore, this paper selects a 3D convolutional neural network for this study [32].
Compared with the 2D convolutional neural network, the 3D convolutional neural network is more suitable for driver state recognition. Through 3D convolution and 3D pooling, a 3D convolutional neural network can model both temporal and spatial information, which gives it an advantage in a driver state recognition system. In a 3D convolutional neural network, convolution and pooling operate simultaneously in time and space, whereas in a 2D convolutional neural network they operate only in space; on the time axis, 2D convolution involves no temporal factor. Therefore, compared with traditional lower-dimensional convolution, 3D convolution is better suited to recognizing continuous, extended motion states.
Figure 2 is a comparison diagram of the principles of a 2D convolutional neural network and a 3D convolutional neural network.
The comparison shows that, when processing a single image or a video stream, the 2D convolutional neural network outputs a single image; when a video stream is convolved into a single output image, the time information is lost and the information along the video's time axis cannot be fused. By contrast, when a video stream is input to the 3D convolutional neural network, the output is a complete 3D feature map that retains both temporal and spatial information. Therefore, the 3D convolutional neural network is suitable for driver state recognition in the surveillance video studied in this paper.
A C3D convolutional neural network model is selected in this paper. This model is a 3D convolutional neural network for behavior recognition in video. It is characterized by a simple architecture that can fully extract the temporal and spatial information of the video and offers relatively high operating efficiency while maintaining accuracy. The C3D convolutional neural network consists of 7 parts. The first and second parts each consist of a convolutional layer and a pooling layer. The third to fifth parts each consist of two convolutional layers and one pooling layer. The sixth part is two fully connected layers. The seventh part is the Softmax layer [33]. The Softmax function is shown in Equation (6). The C3D convolutional neural network is shown in Figure 3.
p(y \mid x) = \frac{\exp\left(w_y^{\top} x\right)}{\sum_{c \in C} \exp\left(w_c^{\top} x\right)} .
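The following PyTorch sketch shows the general shape of such a 3D network: 3D convolutions and 3D pooling over the frame, height, and width axes, followed by fully connected layers and a softmax as in Equation (6). The layer counts, channel widths, and 16-frame 112 × 112 input are illustrative and do not reproduce the exact C3D configuration.

```python
import torch
import torch.nn as nn

# A simplified C3D-style network: 3D convolution and 3D pooling over
# (frames, height, width), followed by fully connected layers.
class SimpleC3D(nn.Module):
    def __init__(self, num_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),   # pool only in space for early layers
            nn.Conv3d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(kernel_size=2),           # pool in time and space
            nn.Conv3d(128, 256, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(kernel_size=2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(256 * 4 * 14 * 14, 2048), nn.ReLU(),
            nn.Linear(2048, num_classes),          # softmax of Equation (6) applied at inference
        )

    def forward(self, x):                          # x: (batch, 3, 16 frames, 112, 112)
        return self.classifier(self.features(x))

probs = torch.softmax(SimpleC3D()(torch.randn(1, 3, 16, 112, 112)), dim=1)
```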

2.1.3. YOLOv3

The YOLOv3 model, like the YOLOv1 and YOLOv2 networks, belongs to the end-to-end YOLO series. The YOLO network uses whole-image information to make predictions: unlike the sliding-window method and region-proposal-based methods, the YOLO network can exploit the whole image during training and prediction and can learn generalized information about the target, giving it a certain universality. Compared with the earlier YOLO network models, the YOLOv3 network model mainly achieves a better trade-off between detection speed and accuracy. Experiments show that, on a Tesla V100, the real-time detection speed on the MS COCO data set reaches 65 FPS with an accuracy of 43.5% AP [34].
YOLOv3 is an efficient and powerful target detection network [35]. Existing papers have verified a large number of advanced techniques that affect target detection performance, and current advanced target detection methods have been improved to make them more effective and better suited to training on a single GPU. These improvements include CBN, PAN, SAM, etc.
YOLOv3 divides the image into S × S grid cells, and the cell containing the center of a target is responsible for predicting that target. In order to detect C classes of targets, each cell needs to predict B bounding boxes and P conditional class probabilities (P = C), and to output the confidence that a bounding box contains a target together with the accuracy of that bounding box. The confidence corresponding to each bounding box is calculated as follows:
D_{\mathrm{confidence}} = P(o) \times I_{\mathrm{pred}}^{\mathrm{truth}} ,
where o is the detected target; P(o) is the probability that the grid cell contains the detected target; and I_pred^truth is the intersection over union (IoU) of the predicted bounding box and the true bounding box. If the cell contains the target, that is, the center of the target falls within the cell, P(o) is 1; otherwise it is 0. The class confidence corresponding to each bounding box is the product of the box confidence and the conditional class probability, calculated as:
E_{\mathrm{confidence}} = P(c_l \mid o) \times P(o) \times I_{\mathrm{pred}}^{\mathrm{truth}} ,
where c_l is the class of the detected target and l is the class index, l = 1, 2, …, C.
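A minimal sketch of how Equations (7) and (8) can be evaluated for one predicted box is given below; the (x1, y1, x2, y2) box format and the helper names are assumptions for illustration.

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def box_confidence(p_obj, pred_box, truth_box):
    # Equation (7): D_confidence = P(o) * IoU(pred, truth)
    return p_obj * iou(pred_box, truth_box)

def class_confidence(p_class_given_obj, p_obj, pred_box, truth_box):
    # Equation (8): E_confidence = P(c_l | o) * P(o) * IoU(pred, truth)
    return p_class_given_obj * box_confidence(p_obj, pred_box, truth_box)
```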
YOLO creatively merges the two stages of candidate-region generation and object recognition, so the objects present and their locations can be obtained at a glance. In fact, YOLO does not eliminate candidate regions; rather, it uses predefined candidate regions.
YOLO first uses the ImageNet data set to pre-train the first 20 convolutional layers, and then uses the complete network to train and predict object recognition and localization on the Pascal VOC data set. The network structure of YOLO is shown in Figure 4.
The last layer of YOLO uses a linear activation function, while the remaining layers use Leaky ReLU. Dropout and data augmentation are used during training to prevent overfitting.
After the neural network structure is determined, the training effect is determined by the loss function and the optimizer. YOLO uses ordinary gradient descent as the optimizer. Equation (9) gives the loss function of YOLO:
\lambda_{\mathrm{coord}} \sum_{i=0}^{S^2} \sum_{j=0}^{B} \mathbb{1}_{ij}^{\mathrm{obj}} \left[ \left( x_i - \hat{x}_i \right)^2 + \left( y_i - \hat{y}_i \right)^2 \right] + \lambda_{\mathrm{coord}} \sum_{i=0}^{S^2} \sum_{j=0}^{B} \mathbb{1}_{ij}^{\mathrm{obj}} \left[ \left( \sqrt{w_i} - \sqrt{\hat{w}_i} \right)^2 + \left( \sqrt{h_i} - \sqrt{\hat{h}_i} \right)^2 \right] + \sum_{i=0}^{S^2} \sum_{j=0}^{B} \mathbb{1}_{ij}^{\mathrm{obj}} \left( C_i - \hat{C}_i \right)^2 + \lambda_{\mathrm{noobj}} \sum_{i=0}^{S^2} \sum_{j=0}^{B} \mathbb{1}_{ij}^{\mathrm{noobj}} \left( C_i - \hat{C}_i \right)^2 + \sum_{i=0}^{S^2} \mathbb{1}_{i}^{\mathrm{obj}} \sum_{c \in \mathrm{classes}} \left( p_i(c) - \hat{p}_i(c) \right)^2 .
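The sketch below is a simplified PyTorch rendering of Equation (9): coordinate terms weighted by λ_coord, confidence terms weighted differently for cells with and without objects, and a classification term. It flattens the grid and treats the class term per box rather than per cell, so it illustrates the structure of the loss rather than reproducing the exact training code.

```python
import torch

def yolo_style_loss(pred, target, obj_mask, lambda_coord=5.0, lambda_noobj=0.5):
    """Simplified sketch of Equation (9).

    pred, target: (N, S*S*B, 5 + C) tensors holding (x, y, w, h, conf, class scores...).
    obj_mask:     (N, S*S*B) boolean tensor, True where a box is responsible for an object.
    """
    obj = obj_mask.unsqueeze(-1).float()
    noobj = 1.0 - obj

    # Localization terms (only for boxes responsible for an object)
    xy_loss = ((pred[..., 0:2] - target[..., 0:2]) ** 2 * obj).sum()
    wh_loss = ((pred[..., 2:4].clamp(min=0).sqrt()
                - target[..., 2:4].clamp(min=0).sqrt()) ** 2 * obj).sum()

    # Confidence terms, weighted differently with and without objects
    conf_err = (pred[..., 4:5] - target[..., 4:5]) ** 2
    conf_loss = (conf_err * obj).sum() + lambda_noobj * (conf_err * noobj).sum()

    # Classification term
    cls_loss = ((pred[..., 5:] - target[..., 5:]) ** 2 * obj).sum()

    return lambda_coord * (xy_loss + wh_loss) + conf_loss + cls_loss
```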

2.1.4. Advantages of YOLOv3

Compared with the previous networks, YOLOv3 has a better backbone network (similar to ResNet) with better accuracy. It is worth noting that YOLOv3 predicts three boxes in each cell, and each box has five basic parameters. Thus, for a 416 × 416 picture, there are 845 bounding boxes in v2 and 10,647 in v3. In the cost function, YOLOv3 makes a modification: it does not use softmax (a softmax layer assumes that an image or an object belongs to only one category), but instead uses a logistic regression layer for each category, based on the sigmoid function, whose output is constrained to the range 0 to 1. Therefore, when the sigmoid-constrained output for a certain class exceeds 0.5 after feature extraction, the object is considered to belong to that class, so a single box can predict multiple categories. In addition, comparing the various versions of the network, we will not elaborate further on the software advantages: the performance of YOLOv3 is already sufficient for this experiment. In terms of hardware, although v4 and v5 perform better, their cost is higher than that of v3. In summary, YOLOv3 is the most cost-effective choice and is better suited to this experiment.
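The per-class logistic (sigmoid) prediction described above can be sketched as follows; the logits and targets are made-up values, and BCEWithLogitsLoss stands in for the per-class binary cross-entropy used in place of a softmax loss.

```python
import torch
import torch.nn as nn

# YOLOv3 replaces the softmax over classes with independent sigmoid outputs,
# so one box can belong to several categories at once.
class_logits = torch.tensor([[1.2, -0.4, 2.1]])      # raw class scores for one predicted box

probs = torch.sigmoid(class_logits)                   # each value independently in (0, 1)
predicted = probs > 0.5                               # a box can exceed 0.5 for several classes

# During training, binary cross-entropy is applied per class instead of a softmax loss:
targets = torch.tensor([[1.0, 0.0, 1.0]])
loss = nn.BCEWithLogitsLoss()(class_logits, targets)
```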

2.2. Experiments

In this paper, the driver’s abnormal behavior is divided into the following categories: drinking, making phone calls, and smoking. The framework of the driver status recognition system is shown in Figure 5.

2.2.1. Experimental Environment Construction

This experiment was conducted under the Linux system, based on the PyTorch framework. Finally, we transplanted the trained network to a Jetson nano B01 development board and tested driver state recognition by calling the camera. We placed our equipment in the car without affecting the driver's sight; the in-vehicle configuration is shown in Figure 6.
NVIDIA released the Jetson nano development kit at the NVIDIA GPU Technology Conference in 2019. It has excellent image-processing ability and integrated CUDA support. The Jetson nano uses a quad-core 64-bit ARM CPU and a 128-core integrated NVIDIA GPU, providing 472 GFLOPS of computing performance. It has 4 GB of LPDDR4 memory in an efficient, low-power package with 5 W/10 W power modes and 5 V DC input [36]. We migrated the system to the Jetson nano development board, benefiting from its powerful performance. The arrangement of the experimental equipment is shown in Figure 7.
The newly released JetPack 4.2 SDK provides a complete desktop Linux environment for the Jetson nano based on Ubuntu 18.04, with accelerated graphics and support for the NVIDIA CUDA Toolkit 10.0, cuDNN 8.0, and TensorRT. The SDK also includes native installations of popular open-source machine learning (ML) frameworks, as well as frameworks for computer vision and robot development.
The project can be transplanted to the Jetson nano, which makes it lightweight and more portable, with rich peripheral resources. The Jetson nano provides real-time computer vision and inference for a variety of complex deep neural network (DNN) models, and it fits the architectures used for device connection and system formation in intelligent edge detection for the Internet of Things. Transfer learning can even be used to retrain the network locally on the development kit with an ML framework.
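A hypothetical inference loop for this deployment is sketched below: frames are grabbed from the on-board camera with OpenCV and passed through the trained model, and an alarm is raised for abnormal states. The model object, the preprocess function, and the class labels are placeholders, not the actual implementation.

```python
import cv2
import torch

LABELS = ["drinking", "phoning", "smoking", "normal"]  # assumed label set

def run(model, preprocess, device="cuda"):
    model.eval().to(device)
    cap = cv2.VideoCapture(0)                      # vehicle-mounted camera
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            x = preprocess(frame).unsqueeze(0).to(device)
            with torch.no_grad():
                label = LABELS[model(x).argmax(dim=1).item()]
            if label != "normal":
                print(f"ALERT: driver is {label}")  # placeholder for the alarm intervention
    finally:
        cap.release()
```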

2.2.2. Data Set Production

Training a deep model requires a large amount of data, and the samples must not be overly similar to one another. In this study, the data set consisted of open-source pictures collected from the Internet and photos taken by ourselves.
This design uses YOLO Mark to label pictures. YOLO Mark is YOLO’s data set labeling software, which is very convenient.
The prepared picture set should be divided into the training set, validation set and test set.
The training set contains the data samples used for model fitting, that is, to train the model's parameters. The validation set is a set of samples set aside during training and is used to tune the model's hyperparameters. Different combinations of hyperparameters correspond to different candidate models, so what runs on the validation set is effectively a collection of models; the validation set exists to find the best-performing model among these candidates. It can be used to adjust the hyperparameters and to make a preliminary assessment of the model's capabilities.
For the neural network, we used the validation data set to find the optimal network depth, determine the stopping point of the backpropagation algorithm, and select the number of hidden-layer neurons.
The cross-validation commonly used in ordinary machine learning subdivides the training data set itself into different validation subsets to train the model. The test set is used to evaluate the generalization ability of the final model; it cannot be used as a basis for algorithm-related choices such as parameter tuning or feature selection.
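As an illustration of the split described above, the sketch below divides a PyTorch dataset into training, validation, and test subsets; the 70/15/15 ratio and the fixed seed are assumptions, as the paper does not state the exact proportions.

```python
import torch
from torch.utils.data import random_split

def split_dataset(dataset, train_frac=0.7, val_frac=0.15, seed=42):
    """Split a labeled dataset into training, validation, and test subsets."""
    n = len(dataset)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    n_test = n - n_train - n_val
    return random_split(dataset, [n_train, n_val, n_test],
                        generator=torch.Generator().manual_seed(seed))
```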

3. Results

In this paper, PyTorch was the deep learning framework used for algorithm development and experiments. The loss curve in Figure 8 shows that the loss continues to decrease as training proceeds and has essentially converged by about 35,000 iterations, after which the change is no longer obvious. This shows that the training of the model is effective.
Figure 9 shows that the accuracy improves continuously. Among the three behaviors, the accuracy for making a call reaches its peak fastest, followed by drinking water, with smoking the slowest. Throughout the process, the accuracy for making a call and drinking water stayed above 90%, while the accuracy for smoking was slightly lower, which may be due to the composition of the data set.
Considering that the system needs to be deployed on the embedded development board Jetson nano, whose computational power is limited, we compressed the model from 32-bit floating-point to an 8-bit representation. The predicted identification results of this experiment are shown in the following nine subplots.
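One possible way to perform such a compression is post-training quantization; the sketch below uses PyTorch's dynamic quantization of the fully connected layers to 8-bit integers as an illustration, since the paper does not specify the tool or procedure it used.

```python
import torch

def compress(model):
    """Quantize the fully connected layers of a trained model to 8-bit integers.
    This is only one illustrative compression approach, not the paper's exact method."""
    return torch.quantization.quantize_dynamic(
        model.eval(), {torch.nn.Linear}, dtype=torch.qint8
    )
```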
As shown in Figure 10, we selected part of the experimental results. The predicted recognition results for smoking are shown in subplots Figure 10a–c, the predicted results for phoning in subplots Figure 10d–f, and the identification results for drinking in subplots Figure 10g–i.
This study then applied our model to a real-world scenario. Several volunteers participated in the production of the data of this actual scene. They were responsible for making the prescribed actions and simulating the irregular actions of the driver in the driving process. The acquisition system arranged in the real car was responsible for shooting and real-time analysis.
The test results of this experiment are shown in Figure 11; several representative pictures were selected for display. From the figure, we can see that the system analyzed and identified the abnormal behaviors defined in our experiment and located the props related to those behaviors very accurately.
In this experiment, we recruited three volunteer drivers with different appearance characteristics to conduct multiple trials. The experiment was carried out on a closed road, which both preserved the authenticity of the experiment and guaranteed the safety of the volunteer drivers. The results show that the average accuracy across the three designed scenarios exceeded 83%, with some results reaching 91%. Most trials achieved good detection accuracy; the lower accuracy in some trials may be due to occlusions in the complex real-world scene.

4. Conclusions

This paper presents a driver state recognition system based on a 3D convolutional neural network. The model parameters of the system are generated by iterative learning from a large number of training samples. Extensive experiments show good recognition accuracy, though there is still room for improvement. In practical applications, the system takes collected continuous video frames as its input; multi-level feature extraction and spatio-temporal information fusion are carried out by the learned model parameters and the 3D convolutional neural network. The system recognizes the driver's status from the extracted spatio-temporal features and issues an early warning and intervention when the driver is in a poor state, thereby improving travel safety to a certain extent.

Author Contributions

The authors of this article all had a significant contribution to the work performed, including the following: conceptualization, L.W. and W.P.; data curation, W.P.; formal analysis, L.W. and W.P.; methodology, L.W. and W.P.; supervision, Y.X.; validation, L.W. and W.P.; writing—original draft, L.W. and W.P.; writing—review and editing, W.P. and L.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China General Project: 61973178, National Natural Science Foundation-Smart Grid Joint Fund Key Project: U2066203, National Natural Science Foundation of China project number 6210020040 and the Nantong University talent introduction project: Research on high precision and strong robust machine vision detection technology.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hu, X.H.; Zheng, M.T.Y. Research Progress and Prospects of Vehicle Driving Behavior Prediction. World Electr. Veh. J. 2021, 12, 88. [Google Scholar] [CrossRef]
  2. Zhai, W.; Zhang, X.; Hou, H.; Meng, Q. Olfactory electroencephalogram signal recognition based on wavelet energy moment. Shengwu Yixue Gongchengxue Zazhi 2020, 3, 399–404. [Google Scholar]
  3. Zhai, X.L.; Tin, C. Automated ECG Classification Using Dual Heartbeat Coupling Based on Convolutional Neural Network. IEEE Access 2018, 6, 27465–27472. [Google Scholar] [CrossRef]
  4. Kaur, A. Wheelchair control for disabled patients using EMG/EOG based human machine interface: A review. IEEE Access 2020, 6, 1–22. [Google Scholar]
  5. Deo, R.C. Machine Learning in Medicine. Sensors 2015, 132, 1920–1930. [Google Scholar]
  6. Godara, M.; Sanchez-Lopez, A.; De Raedt, R. Manipulating avoidance motivation to modulate attention bias for negative information in dysphoria: An eye-tracking study. J. Behav. Ther. Exp. Psychiatry 2021, 3, 101613. [Google Scholar] [CrossRef] [PubMed]
  7. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. Commun. ACM 2017, 3, 84–90. [Google Scholar] [CrossRef]
  8. Chiroma, H.; Gital, A.Y.; Rana, N.; Abdulhammid, S.M.; Muhammad, A.N.; Umar, A.Y.; Abubakar, A.I. Nature Inspired Meta-heuristic Algorithms for Deep Learning: Recent Progress and Novel Perspective. Adv. Comput. Vis. 2019, 943, 59–70. [Google Scholar]
  9. Karpathy, A.; Toderici, G.; Shetty, S.; Leung, T.; Sukthankar, R.; Li, F.-F. Large-scale Video Classification with Convolutional Neural Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Columbus, OH, USA, 23–28 June 2014; pp. 1725–1732. [Google Scholar]
  10. Ren, S.Q.; He, K.M.; Girshick, R.; Sun, J.L. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. Neural Inf. Process. Syst. 2015, 28, 91–99. [Google Scholar] [CrossRef]
  11. Alcaraz, J.C.; Moghaddamnia, S.; Peissig, J. Efficiency of deep neural networks for joint angle modeling in digital gait assessment. Eurasip J. Adv. Signal Process. 2021, 1, 10. [Google Scholar] [CrossRef]
  12. Yu, H.F.; Sun, Y.; Lu, D.; Cai, K.; Yu, X. Discrimination of lung cancer and adjacent normal tissues based on permittivity by optimized probabilistic neural network. Nan Fang Yi Ke Da Xue Xue Bao = J. South. Med. Univ. 2020, 59, 370–378. [Google Scholar]
  13. Bin, J.; Mei-xia, Q.; Qing-wei, L.; Shu-hua, C.; Yun-peng, Z. Study on Spectra Decomposition of White Dwarf Main-Sequence Binary Stars Based on Generation Antagonistic Network. Spectrosc. Spectr. Anal. 2020, 40, 3298–3302. [Google Scholar]
  14. Alharthi, A.S.; Yunas, S.U.; Ozanyan, K.B. Deep Learning for Monitoring of Human Gait: A Review. IEEE Sens. J. 2019, 19, 9575–9591. [Google Scholar] [CrossRef]
  15. Zeng, Y.; Dai, H.H. CT Image Segmentation of Liver Tumor with Deep Convolutional Neural Network. J. Med. Imaging Health Inform. 2021, 11, 337–344. [Google Scholar] [CrossRef]
  16. Mu, G.R.; Yang, Y.P.; Gao, Y.Z.; Feng, Q. Multi-scale 3D convolutional neural network-based segmentation of head and neck organs at risk. Nan Fang Yi Ke Da Xue Xue Bao = J. South Med. Univ. 2020, 40, 491–498. [Google Scholar]
  17. Li, X.K.; Liu, X.Y.; Li, Y.M.; Cao, H.; Chen, Y.; Lin, Y.; Huang, X. Human activity recognition based on the inertial information and convolutional neural network. Sheng Wu Yi Xue Gong Cheng Xue Za Zhi = J. Biomed. Eng. 2020, 37, 596–601. [Google Scholar]
  18. Li, Q.; Chen, Z.; He, J.J.; Hao, S.Y.; Wang, R.; Yang, H.T.; Sun, H.J. Reconstructing the 3D digital core with a fully convolutional neural network. Appl. Geophys. 2020, 17, 401–410. [Google Scholar] [CrossRef]
  19. Mousavi, S.A.S.; Zhang, X.; Seigler, T.M.; Hoagg, J.B. Characteristics That Make Linear Time-Invariant Dynamic Systems Difficult for Humans to Control. IEEE Trans. Human-Mach. Syst. 2021, 51, 141–151. [Google Scholar] [CrossRef]
  20. Dian-Wei, W.; Peng-Fei, H.; Jiu-Lun, F.; Ying, L.; Zhi-Jie, X.; Jing, W. Multispectral image enhancement based on illuminance-reflection imaging model and morphology operation. Acta Phys. Sin. 2018, 67, 21. [Google Scholar]
  21. Wu, Z.; Zeng, J.X. Aircraft target recognition in remote sensing image based on saliency map and multi feature. Chin. J. Image Graph. 2017, 22, 532–541. [Google Scholar]
  22. Hong, L.; Wei, W.; Xiao-Min, Y.; Bin-Yu, Y.; Kai, L.; Jeon, G. Multispectral image enhancement based on Retinex by using structure extraction. Chin. J. Image Graph. 2016, 65, 16. [Google Scholar]
  23. Ayachi, R.; Said, Y.; Abdelaali, A.B. Pedestrian Detection Based on Light-Weighted Separable Convolution for Advanced Driver Assistance Systems. Neural Process. Lett. 2020, 52, 2655–2668. [Google Scholar] [CrossRef]
  24. Jeong, Y.; Lee, B.; Han, J.H.; Oh, J. Ocular Axial Length Prediction Based on Visual Interpretation of Retinal Fundus Images via Deep Neural Network. IEEE J. Sel. Top. Quantum Electron. 2021, 27, 7200407. [Google Scholar] [CrossRef]
  25. Hu, S.M.; Liang, D.; Yang, G.Y.; Yang, G.W.; Zhou, W.Y. Jittor: A novel deep learning framework with meta-operators and unified graph execution. Sci. China Inf. Sci. 2020, 63, 118–138. [Google Scholar] [CrossRef]
  26. Ketkar, N. Introduction to PyTorch. In Deep Learning with Python; Apress: Berkeley, CA, USA, 2017. [Google Scholar]
  27. Ou, X.F.; Yan, P.C.; Wang, H.P.; Xu, B.; He, W.; Zhang, G.Y.; Xu, Z. Research on Moving Target Detection Method Based on Deep Frame Difference Convolutional Neural Network. Chin. J. Electron. 2020, 48, 2384–2393. [Google Scholar]
  28. Yuan, C.X.; Jia, D.N.; Zhou, S.H. Research and application of convolutional neural network in mining area prediction. J. Eng. Sci. 2020, 42, 1597–1604. [Google Scholar]
  29. Li, X.; Jiang, Y.; Li, M.; Yin, S. Lightweight Attention Convolutional Neural Network for Retinal Vessel Image Segmentation. IEEE Trans. Ind. Inform. 2021, 17, 1958–1967. [Google Scholar] [CrossRef]
  30. He, Z.Y.; Wang, Y.; Zhang, P.H.; Zuo, K.; Liang, P.F.; Zeng, J.Z.; Zhou, S.T.; Guo, L.; Huang, M.T.; Cui, X. Establishment and test results of an artificial intelligence burn depth recognition model based on convolutional neural network. Zhonghua shao shang za zhi/Chin. J. Burn. 2020, 3, 399–404. [Google Scholar]
  31. Firino, C.; Zhang, C.; Patras, P.; Banchs, A.; Widmer, J. A Machine-Learning-Based Framework for Optimizing the Operation of Future Networks. IEEE Commun. Mag. 2020, 58, 20–25. [Google Scholar] [CrossRef]
  32. Tang, Y.; Teng, Q.; Zhang, L.; Min, F.; He, J. Layer-Wise Training Convolutional Neural Networks With Smaller Filters for Human Activity Recognition Using Wearable Sensors. IEEE Sens. J. 2021, 21, 581–592. [Google Scholar] [CrossRef]
  33. Li, X.B.; Wang, W.Q. Learning discriminative features via weights-biased softmax loss. Pattern Recognit. 2020, 107, 107405. [Google Scholar] [CrossRef]
  34. Redmon, J.; Farhadi, A. YOLO9000: Better, Faster, Stronger. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 6517–6525. [Google Scholar]
  35. Canepa, A.; Ragusa, E.; Zunino, R.; Gastaldo, P. T-RexNet-A Hardware-Aware Neural Network for Real-Time Detection of Small Moving Objects. Sensors 2021, 21, 1252. [Google Scholar] [CrossRef]
  36. Cass, S. Nvidia makes it easy to embed AI: The Jetson nano packs a lot of machine-learning power into DIY projects- Hands on. IEEE Spectr. 2020, 57, 14–16. [Google Scholar] [CrossRef]
Figure 1. Neural network model.
Figure 2. Schematic diagram of 2D and 3D convolutional neural networks. (a) Applying 2D convolution to an image results in an image; (b) applying 2D convolution to a video volume (multiple frames as multiple channels) also results in an image; (c) applying 3D convolution to a video volume results in another volume, preserving the temporal information of the input signal.
Figure 3. C3D convolutional neural network structure. This architecture consists of 1 hardwired layer, 3 convolution layers, 2 subsampling layers, and 1 fully connected layer. H denotes a hardwired layer, C a convolution layer, and S a subsampling layer; 7@60 × 40 denotes 7 continuous frames of size 60 × 40.
Figure 4. The network structure of YOLO.
Figure 5. Design block diagram of driver status recognition system.
Figure 6. Commissioning status.
Figure 7. Jetson Nano B01 development board.
Figure 8. The loss curve.
Figure 9. Recognition accuracy.
Figure 10. Forecast results. The predicted recognition results for smoking are shown in subplots (a–c), the predicted results for phoning in subplots (d–f), and the identification results for drinking in subplots (g–i).
Figure 11. Test results.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
