Article

Manta Ray Foraging Optimization with Transfer Learning Driven Facial Emotion Recognition

by Anwer Mustafa Hilal 1,*, Dalia H. Elkamchouchi 2, Saud S. Alotaibi 3, Mohammed Maray 4, Mahmoud Othman 5, Amgad Atta Abdelmageed 1, Abu Sarwar Zamani 1 and Mohamed I. Eldesouki 6
1 Department of Computer and Self Development, Preparatory Year Deanship, Prince Sattam bin Abdulaziz University, AlKharj 16278, Saudi Arabia
2 Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
3 Department of Information Systems, College of Computing and Information System, Umm Al-Qura University, Mecca 24382, Saudi Arabia
4 Department of Information Systems, College of Computer Science, King Khalid University, Abha 62529, Saudi Arabia
5 Department of Computer Science, Faculty of Computers and Information Technology, Future University in Egypt, New Cairo 11835, Egypt
6 Department of Information System, College of Computer Engineering and Sciences, Prince Sattam bin Abdulaziz University, AlKharj 16278, Saudi Arabia
* Author to whom correspondence should be addressed.
Sustainability 2022, 14(21), 14308; https://doi.org/10.3390/su142114308
Submission received: 4 October 2022 / Revised: 19 October 2022 / Accepted: 27 October 2022 / Published: 2 November 2022

Abstract

Recently, facial expression-based emotion recognition techniques have obtained excellent outcomes in several real-time applications such as healthcare and surveillance. Machine-learning (ML) and deep-learning (DL) approaches can be widely employed for facial image analysis and emotion recognition problems. Therefore, this study develops a Transfer Learning Driven Facial Emotion Recognition for Advanced Driver Assistance System (TLDFER-ADAS) technique. The TLDFER-ADAS technique supports proper driving by determining the different types of drivers’ emotions. The TLDFER-ADAS technique initially performs a contrast enhancement procedure to enhance image quality. In the TLDFER-ADAS technique, the Xception model was applied to derive feature vectors. For driver emotion classification, manta ray foraging optimization (MRFO) with the quantum dot neural network (QDNN) model was exploited in this work. The experimental result analysis of the TLDFER-ADAS technique was performed on the FER-2013 and CK+ datasets. The comparison study demonstrated the promising performance of the proposed model, with maximum accuracy of 99.31% and 99.29% on the FER-2013 and CK+ datasets, respectively.

1. Introduction

With the drastic increase in urban population, smart cities have become popular in several sectors such as environmental sustainability, healthcare, transportation, etc. [1,2,3]. The transformation of traditional cities into smart cities will depend greatly on modern computing paradigms, particularly machine learning (ML), the Internet of Things (IoT), data mining (DM), and artificial intelligence (AI) [4]. In the future, each conglomerate network of traditional cities (for example, electricity, information, transportation, and so on) will be assisted by a large number of IoT devices, generating a wide variety of heterogeneous and unstructured data in return [5,6]. Drivers’ emotional state can affect their driving capability while driving vehicles. Because of the growing complexity of vehicles, identifying the emotions of drivers has become increasingly prominent [7]. To guarantee a more pleasant and secure ride, a good infotainment system could accurately identify the driver’s emotional state beforehand and adjust the vehicle dynamics accordingly [8,9,10]. In smart cars, it is crucial to identify the emotion of the driver, since the vehicle could make decisions regarding what to do in specific situations based on the driver’s psychological state (for instance, autonomous driving, driving modes, and mood-altering songs) [11]. Emotions such as disgust, sadness, fear, and anger impair the driver’s ability and can lead to road accidents. An emotion-aware system might help avoid road accidents and control vehicle functions based on the driver’s emotions [12].
Driver emotion detection in advanced driver assistance systems (ADAS) can be achieved with facial expression recognition (FER) techniques [13]. FER can be accomplished using manual feature extraction with an ML-based classifier, a deep neural network (DNN), or a hybrid model that couples a convolutional neural network (CNN) with an SVM classifier. Computer vision (CV)-based DL methods are widely employed for emotion monitoring and FER [14,15]. Surveys of deep FER have summarized facial expression datasets and their collection environments, namely the Internet or laboratories. However, most existing work relies mainly on lab-captured datasets because of the scarcity of real-time datasets [16,17]. Little on-road driver facial data is accessible for driver FER tasks, and the driving task itself might suppress facial expressions. Because of these problems, there is a lack of the on-road driver FER studies essential for automotive human-machine systems [18]. Moreover, various aspects affect this type of model, from predicting the appropriate class of expression to accomplishing higher performance.
Though several FER models are available in the literature, there is still a need to improve recognition performance. At the same time, the trial-and-error selection of model parameters is a tedious process. Therefore, this study develops a Transfer Learning Driven Facial Emotion Recognition for Advanced Driver Assistance System (TLDFER-ADAS) technique. The TLDFER-ADAS technique supports proper driving by determining the emotions of drivers, which helps ensure road safety. The TLDFER-ADAS technique initially performs a contrast enhancement procedure to enhance image quality. In the TLDFER-ADAS technique, the Xception model was applied to derive feature vectors. For driver emotion classification, manta ray foraging optimization (MRFO) with a quantum dot neural network (QDNN) model was exploited in this work. The proposed system can be used to classify the different emotions of drivers, namely anger, disgust, fear, happiness, sadness, surprise, and neutral. The experimental result analysis of the TLDFER-ADAS technique was performed on benchmark datasets. In short, the paper’s contributions can be summarized as follows.
  • An intelligent TLDFER-ADAS technique encompassing preprocessing, Xception feature extraction, QDNN classification, and MRFO parameter tuning is presented for facial emotion classification;
  • To the best of our knowledge, the presented TLDFER-ADAS technique has not previously appeared in the literature;
  • Parameter tuning of the QDNN model using the MRFO algorithm helps in accomplishing significant classification performance;
  • The emotion recognition performance was validated on two facial datasets: FER-2013 and CK+ datasets.
The rest of the paper is organized as follows: Section 2 offers a brief survey of existing works; Section 3 introduces the proposed model; Section 4 provides a detailed performance validation; and Section 5 draws the conclusions.

2. Related Works

Naqvi et al. [19] presented a multimodal technique for detecting aggressive driver behavior remotely. This model relies on variations in the gaze and facial emotions of drivers while driving, captured with a near-infrared (NIR) camera sensor and an illuminator installed in the vehicle. Drivers’ aggressive and normal time-series data were collected using a driving game simulator while playing car racing and truck driving computer games, respectively. Paikrao et al. [20] proposed a front-end processing model for stress emotion identification in various noisy environments. The proposed model analyzed noisy speech emotions in the presence of background noise, extracted Mel-frequency cepstral coefficient (MFCC) features, and evaluated the overall system performance.
Jeong and Ko [21] developed a fast FER technique to monitor a driver’s emotion that can operate on a low-specification device mounted in vehicles. For this purpose, a hierarchical weighted random forest (WRF) classifier was trained using the similarity of the sample dataset to increase its performance. Initially, facial landmarks were detected from the input images and geometric features were extracted that consider the spatial locations among the landmarks. In [22], a hybrid network structure based on a DNN and an SVM was developed to predict six or seven driver emotions under dissimilar illumination conditions, poses, and occlusions. To determine the emotion, a combination of LBP and Gabor features was extracted and categorized by an SVM classifier integrated with the CNN. Xiao et al. [23] developed a facial expression-based on-road driver emotion detection network named FERDERnet. This technique splits the on-road driver FER task into three components: a face recognition module that identifies the driver’s face, an augmentation-based resampling module that performs resampling and data augmentation, and an emotion recognition module that adopts a DCNN pre-trained on CK+ and FER data and later fine-tuned as a backbone for recognizing driver emotions.
Mehendale [24] developed a FER technique based on CNNs (FERC). The method comprises several parts: it first removes the background from the image and then concentrates on extracting facial feature vectors. In the FERC method, an expressional vector (EV) is utilized to identify five distinct kinds of regular facial expressions. The two-level CNN operates sequentially, and the final perceptron layer adjusts the exponent values and weights in every iteration. FERC thus differs from the usual single-level CNN strategy, thereby enhancing performance.
Oh et al. [25] presented a DL-based driver’s real emotion recognizer (DRER) to identify drivers’ real emotions that cannot be wholly recognized from facial expressions alone. Li et al. [26] proposed a cognitive-feature-augmented driver emotion detection model that depends on deep networks and emotional cognitive methods. A convolutional technique was selected to build the driver emotion recognition approach while concurrently considering cognitive process characteristics and the driver’s facial expression.

3. The Proposed Model

In this study, we introduce a new TLDFER-ADAS technique for smart city environments. The TLDFER-ADAS technique aids proper driving by determining the emotions of drivers, which helps ensure road safety. The TLDFER-ADAS technique encompasses a sequence of operations: contrast enhancement, Xception-based feature extraction, QDNN-based emotion classification, and MRFO-based hyperparameter tuning. Figure 1 depicts the overall block diagram of the TLDFER-ADAS system.

3.1. Contrast Enhancement

The TLDFER-ADAS technique first performs a contrast enhancement procedure to improve image quality. The contrast limited adaptive histogram equalization (CLAHE) technique addresses the low-contrast problem of digital images, and it performs better than adaptive histogram equalization (AHE) and normal histogram equalization (HE) [27]. CLAHE works by constraining the contrast amplification that ordinary HE would otherwise apply, which tends to amplify noise. By limiting the contrast enhancement of HE, the desired outcome is achieved even in cases where noise would otherwise be amplified along with the contrast, as is common in healthcare images. In general, contrast enhancement can be defined as the slope of the mapping from input image intensity to the desired output intensity; the contrast is limited by constraining the slope of this function. Contrast enhancement is also strongly connected to the histogram height at a given intensity value. Therefore, limiting the slope and clipping the histogram height are equivalent mechanisms for controlling contrast enhancement.
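As an illustration only, a minimal sketch of this preprocessing step is given below, assuming an OpenCV-based CLAHE implementation; the clip limit and tile grid size are illustrative defaults, since the paper does not report the values used.

```python
# Minimal CLAHE preprocessing sketch (assumed OpenCV implementation; the paper
# does not state the clip limit or tile grid size, so the values below are
# illustrative defaults, not the authors' settings).
import cv2

def enhance_contrast(image_path, clip_limit=2.0, tile_grid_size=(8, 8)):
    """Read a facial image, apply CLAHE, and return the contrast-enhanced result."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)  # FER-2013/CK+ images are grayscale
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid_size)
    return clahe.apply(gray)
```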

3.2. Xception Based Feature Extraction

In the TLDFER-ADAS technique, the Xception model is applied to produce feature vectors. In this study, the Xception network, built on depthwise separable convolutions, was adopted as the CNN for the classification of driver emotions [28]. Xception is intended to separate spatial correlations and cross-channel correlations completely. Thus, it applies depthwise separable convolutions comprising depthwise and pointwise layers. The depthwise part processes each channel separately with a 3 × 3 convolution, generating one feature map per channel. This reduces the computation required by the otherwise costly 3 × 3 convolution, thereby avoiding a bottleneck [29]. The pointwise layer then applies a 1 × 1 convolution across all channels of the depthwise output. By using depthwise separable convolutions, feature extraction is implemented effectively and the amount of computation is decreased. Furthermore, a deep network is created by applying techniques such as batch normalization and ResNet-style skip connections to Xception.
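The following is a hedged sketch of ImageNet-pretrained Xception feature extraction using tf.keras; the 299 × 299 input size and global average pooling are assumptions made for illustration and are not specified in the paper.

```python
# Sketch of Xception-based feature extraction with ImageNet transfer learning
# (assumed tf.keras implementation; input size and pooling choice are illustrative).
import numpy as np
import tensorflow as tf

backbone = tf.keras.applications.Xception(
    weights="imagenet", include_top=False, pooling="avg",
    input_shape=(299, 299, 3))

def extract_features(images):
    """images: float array of shape (n, 299, 299, 3) with pixel values in [0, 255].
    Grayscale facial images should be replicated to three channels beforehand."""
    x = tf.keras.applications.xception.preprocess_input(
        np.asarray(images, dtype="float32"))
    return backbone.predict(x)  # (n, 2048) feature vectors
```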
To adjust the hyperparameters, the Adam optimizer is used. Adam is a first-order, gradient-based method for stochastic objective function optimization [30]. Adam integrates the benefits of AdaGrad, which handles sparse gradients well, and RMSProp, which suits non-stationary and non-linear optimization problems. Adam has the advantages of low memory requirements, easy implementation, and high computing efficiency. It is invariant to diagonal rescaling of the gradients, hence it is appropriate for problems with many parameters or large-scale data. For distinct variables, Adam iteratively updates the weights of the NN and adjusts the learning rate adaptively based on the training dataset.
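The update rule Adam applies can be sketched as follows in NumPy; the beta and epsilon values are the customary defaults, and only the learning rate of 0.01 is taken from the experimental settings reported in Section 4.

```python
# Illustrative Adam update step, following the standard formulation
# (beta1, beta2, and eps are the usual defaults; only lr=0.01 comes from the paper).
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.01, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for parameters theta given gradient grad at step t (t >= 1)."""
    m = beta1 * m + (1 - beta1) * grad           # first moment (momentum-like term)
    v = beta2 * v + (1 - beta2) * grad ** 2      # second moment (squared-gradient scale)
    m_hat = m / (1 - beta1 ** t)                 # bias correction
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```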

3.3. Driver Emotion Recognition

For driver emotion classification, the QDNN model was exploited in this work. The QDNN is built from quantum dots, each consisting of a single electron confined inside a cage of atoms [31]. As in any ANN architecture, a neuron receives input from other processing units via weighted connections and computes an output that is passed on to other neurons. The output $y_i$ of the $i$-th neuron is computed from the signals $\{y_j\}$ of the other neurons as:
$$ y_i = \sum_{j} w_{ij} f_j(y_j) \qquad (1) $$
In Equation (1), $w_{ij}$ represents the weight from the $j$-th to the $i$-th neuron, and $f_j$ denotes the activation function of the $j$-th neuron. Consequently, the quantum propagation of the network state can be represented by the following equation:
$$ |\Omega(y_f, S)\rangle = \lim_{N \to \infty} \int_{(y_0,\, 0)}^{(y_{N+1} \to y_f,\, S)} dy_1 \cdots dy_N \left(\frac{m}{2 \pi i \hbar\, \Delta t}\right)^{\frac{N+1}{2}} \exp\!\left(\frac{i\, \Delta t}{\hbar} \sum_{j=0}^{N} \left[\frac{m}{2}\left(\frac{y_{j+1} - y_j}{\Delta t}\right)^{2} - V(y_j)\right]\right) |\Omega(y_0, 0)\rangle \qquad (2) $$
In the above, $|\Omega(y_0, 0)\rangle$ indicates the input state of the quantum model, and $|\Omega(y_f, S)\rangle$ denotes the output state at time $t = S$. $G$ denotes the Green’s function that propagates the system forward in time, from an initial location $y_0$ at time $t = 0$ to the final location $y_f$ at time $t = S$. Equation (2) formulates $G$ in the Feynman path integral formulation of quantum mechanics. The dots are strongly coupled to one another such that tunneling is possible between any two adjacent dot molecules.
$$ |\Omega(\sigma_z(N \Delta t), S)\rangle = \sum_{\sigma_z(j \Delta t)} \exp\!\left(\frac{i\, \Delta t}{\hbar} \sum_{j} \left[K\, \sigma_x(j \Delta t) + \varepsilon(j \Delta t)\, \sigma_z(j \Delta t)\right]\right) I[\sigma_z(t)]\, |\Omega(\sigma_z(0), 0)\rangle \qquad (3) $$
where the sum runs over a finite set of polarization states $\sigma_z$ at every time slice $j \Delta t$; at each time slice, the polarization is $+1$ or $-1$. Figure 2 showcases the framework of the QDNN technique [32].
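As a purely classical illustration of Equation (1), the weighted aggregation of activated signals can be sketched as below; the sigmoid activation is an assumption, and the quantum dot dynamics of Equations (2) and (3) are not reproduced here.

```python
# Classical surrogate of the neuron aggregation in Equation (1);
# the quantum dot propagation of Equations (2)-(3) is not modeled.
import numpy as np

def neuron_outputs(W, y_prev):
    """Equation (1): y_i = sum_j W[i, j] * f_j(y_prev[j]), with a sigmoid as the assumed f_j."""
    f = 1.0 / (1.0 + np.exp(-np.asarray(y_prev, dtype=float)))  # activated incoming signals
    return W @ f
```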
At the last level, the MRFO algorithm is used as a parameter tuning technique. MRFO is inspired by the intelligent foraging behaviors of manta rays (MRs), which apply three distinctive foraging rules to locate a better food source [33]: chain foraging, cyclone foraging, and somersault foraging. The mathematical models are provided as follows. The higher the plankton concentration at a position, the better that position, and all positions are updated toward the best solution found so far. The chain foraging model is written as:
$$ C_x^{dim}(n+1) = C_x^{dim}(n) + rand \cdot \left(C_{best}^{dim}(n) - C_x^{dim}(n)\right) + \varphi \left(C_{best}^{dim}(n) - C_x^{dim}(n)\right), \quad x = 1 \qquad (4) $$
$$ C_x^{dim}(n+1) = C_x^{dim}(n) + rand \cdot \left(C_{x-1}^{dim}(n) - C_x^{dim}(n)\right) + \varphi \left(C_{best}^{dim}(n) - C_x^{dim}(n)\right), \quad x = 2, \ldots, N \qquad (5) $$
where $C_x^{dim}(n)$ signifies the location of the $x$-th individual at time $n$ in the $dim$-th dimension, $rand$ denotes a random vector in $[0, 1]$, $\varphi$ signifies the weighted coefficient, and $C_{best}^{dim}(n)$ represents the position of the plankton with maximal concentration, i.e., the best solution found so far. The spiral-shaped movement of MRs is defined in the subsequent equations:
$$ C_x(n+1) = C_{best} + rand \cdot \left(C_{x-1}(n) - C_x(n)\right) + e^{bt} \cos(2 \pi t) \cdot \left(C_{best} - C_x(n)\right) \qquad (6) $$
$$ D_x(n+1) = D_{best} + rand \cdot \left(D_{x-1}(n) - D_x(n)\right) + e^{bt} \sin(2 \pi t) \cdot \left(D_{best} - D_x(n)\right) \qquad (7) $$
This behavior is extended to the $dim$-dimensional space. The mathematical model of cyclone foraging is defined as:
$$ C_x^{dim}(n+1) = C_{best}^{dim} + rand \cdot \left(C_{best}^{dim}(n) - C_x^{dim}(n)\right) + \alpha \left(C_{best}^{dim}(n) - C_x^{dim}(n)\right), \quad x = 1 \qquad (8) $$
$$ C_x^{dim}(n+1) = C_{best}^{dim} + rand \cdot \left(C_{x-1}^{dim}(n) - C_x^{dim}(n)\right) + \alpha \left(C_{best}^{dim}(n) - C_x^{dim}(n)\right), \quad x = 2, \ldots, N \qquad (9) $$
$$ \alpha = 2\, e^{rand_1 \frac{T - t + 1}{T}} \cdot \sin(2 \pi\, rand_1) \qquad (10) $$
where $\alpha$ represents the weighted coefficient, $T$ stands for the maximum number of iterations, and $rand_1$ represents a random value in $[0, 1]$ [34]. For exploration, every individual searches for a new location away from the current best individual by taking a new random position in the search space as the reference:
$$ C_r^{dim} = LB^{dim} + rand \cdot \left(UB^{dim} - LB^{dim}\right) \qquad (11) $$
where $rand$ denotes a random value, and $LB$ and $UB$ signify the lower and upper limits of the dimension, respectively. Each MR then moves and somersaults to a new location as follows:
$$ C_x^{dim}(n+1) = C_x^{dim}(n) + som \cdot \left(rand_2 \cdot C_{best}^{dim}(n) - rand_3 \cdot C_x^{dim}(n)\right), \quad x = 1, \ldots, N \qquad (12) $$
where $som$ denotes the somersault factor that determines the somersault range of the MRs, and $rand_2$ and $rand_3$ are two random values in $[0, 1]$.
The MRFO methodology uses a fitness function (FF) to realize higher classification performance. It assigns a positive value that reflects the quality of a candidate solution. In this case, the FF to be minimized is the classifier error rate, written as in Equation (13):
$$ fitness(x_i) = \frac{\text{number of misclassified samples}}{\text{total number of samples}} \times 100 \qquad (13) $$
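A hedged NumPy sketch of MRFO minimizing the error-rate fitness of Equation (13) is given below. It follows the standard chain, cyclone, and somersault foraging rules; the population size, iteration budget, somersault factor of 2, and the mapping from candidate vectors to QDNN hyperparameters (abstracted behind the fitness callable) are illustrative assumptions rather than the authors' settings.

```python
# Sketch of MRFO (chain, cyclone, and somersault foraging) for minimizing a
# fitness such as the misclassification rate of Equation (13). Hyperparameter
# decoding is left to the user-supplied `fitness` callable.
import numpy as np

def mrfo(fitness, dim, lb, ub, pop_size=20, max_iter=50, somersault=2.0, seed=None):
    rng = np.random.default_rng(seed)
    lb, ub = np.full(dim, lb, dtype=float), np.full(dim, ub, dtype=float)
    X = rng.uniform(lb, ub, (pop_size, dim))
    F = np.array([fitness(x) for x in X])
    best, best_f = X[F.argmin()].copy(), F.min()

    for t in range(1, max_iter + 1):
        X_new = X.copy()
        for i in range(pop_size):
            r = rng.random(dim)
            if rng.random() < 0.5:                              # cyclone foraging
                r1 = rng.random(dim)
                beta = 2.0 * np.exp(r1 * (max_iter - t + 1) / max_iter) * np.sin(2.0 * np.pi * r1)
                # exploit around the best late in the run, explore a random point early
                ref = best if t / max_iter > rng.random() else rng.uniform(lb, ub)
                prev = ref if i == 0 else X[i - 1]
                X_new[i] = ref + r * (prev - X[i]) + beta * (ref - X[i])
            else:                                               # chain foraging
                alpha = 2.0 * r * np.sqrt(np.abs(np.log(r + 1e-12)))
                prev = best if i == 0 else X[i - 1]
                X_new[i] = X[i] + r * (prev - X[i]) + alpha * (best - X[i])
        X_new = np.clip(X_new, lb, ub)
        F_new = np.array([fitness(x) for x in X_new])
        better = F_new < F
        X[better], F[better] = X_new[better], F_new[better]
        if F.min() < best_f:
            best, best_f = X[F.argmin()].copy(), F.min()

        # somersault foraging around the best-so-far position
        r2, r3 = rng.random((pop_size, dim)), rng.random((pop_size, dim))
        X_som = np.clip(X + somersault * (r2 * best - r3 * X), lb, ub)
        F_som = np.array([fitness(x) for x in X_som])
        better = F_som < F
        X[better], F[better] = X_som[better], F_som[better]
        if F.min() < best_f:
            best, best_f = X[F.argmin()].copy(), F.min()
    return best, best_f

# Example fitness per Equation (13): percentage of misclassified validation samples.
def error_rate(y_true, y_pred):
    return 100.0 * np.mean(np.asarray(y_true) != np.asarray(y_pred))
```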

4. Results and Discussion

The proposed model is simulated using Python 3.6.5 on a PC with an i5-8600K CPU, a GeForce 1050 Ti 4 GB GPU, 16 GB RAM, a 250 GB SSD, and a 1 TB HDD. The parameter settings are as follows: learning rate, 0.01; dropout, 0.5; batch size, 5; epoch count, 50; and activation, ReLU. In this section, the emotion recognition performance of the TLDFER-ADAS approach is tested on two databases, FER-2013 and CK+. The FER-2013 dataset has 35,527 images and the CK+ dataset holds 636 images. The details of the databases are given in Table 1. Figure 3 illustrates some sample images.
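To make the reported settings concrete, the sketch below wires them into a tf.keras training configuration; the dense classification head merely stands in for the QDNN classifier and is not the authors' model, and the hidden width of 256 is an assumption.

```python
# Illustrative training configuration using the reported settings (learning rate
# 0.01, dropout 0.5, batch size 5, 50 epochs, ReLU). The dense head is a stand-in
# for the QDNN classifier, used only to show how the hyperparameters are wired.
import tensorflow as tf

def build_classifier(feature_dim=2048, num_classes=7):
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(256, activation="relu", input_shape=(feature_dim,)),
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return model

# Usage sketch with a 70/30 train/test split of the extracted Xception features:
# model = build_classifier()
# model.fit(train_feats, train_labels, batch_size=5, epochs=50, validation_split=0.3)
```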
The confusion matrices provided by the TLDFER-ADAS model on the FER-2013 dataset are reported in Figure 4. The results imply improved emotion recognition outcomes of the TLDFER-ADAS model. It is noticeable that the TLDFER-ADAS model proficiently identified the seven distinct classes.
Table 2 and Figure 5 demonstrate the detailed outcomes of the TLDFER-ADAS approach on the test FER-2013 dataset. The experimental values indicate the enhanced emotion classification outcomes of the TLDFER-ADAS model. For instance, on the entire dataset, the TLDFER-ADAS model reached an average $accu_y$ of 99.29%, $prec_n$ of 95.80%, $reca_l$ of 96.69%, $F_{score}$ of 96.22%, and $AUC_{score}$ of 98.14%. Likewise, on 70% of the TR dataset, the TLDFER-ADAS algorithm attained an average $accu_y$ of 99.29%, $prec_n$ of 95.69%, $reca_l$ of 96.68%, $F_{score}$ of 96.15%, and $AUC_{score}$ of 98.13%. Finally, on 30% of the TS dataset, the TLDFER-ADAS system resulted in an average $accu_y$ of 99.31%, $prec_n$ of 96.06%, $reca_l$ of 96.71%, $F_{score}$ of 96.36%, and $AUC_{score}$ of 98.15%.
The training accuracy ($TR_{acc}$) and validation accuracy ($VL_{acc}$) acquired by the TLDFER-ADAS system on the FER-2013 database are exhibited in Figure 6. The simulation results point out that the TLDFER-ADAS methodology gained improved values of $TR_{acc}$ and $VL_{acc}$. Notably, $VL_{acc}$ appears to be higher than $TR_{acc}$.
The training loss ($TR_{loss}$) and validation loss ($VL_{loss}$) accomplished by the TLDFER-ADAS system on the FER-2013 database are revealed in Figure 7. The simulation results show that the TLDFER-ADAS system attained reduced values of $TR_{loss}$ and $VL_{loss}$. In particular, $VL_{loss}$ is lower than $TR_{loss}$.
The confusion matrices obtained by the TLDFER-ADAS approach on the CK+ dataset are conveyed in Figure 8. The outcomes indicate the better emotion recognition of the TLDFER-ADAS method. It is obvious that the TLDFER-ADAS technique capably recognized the seven distinct classes.
Table 3 and Figure 9 reveal thorough results of the TLDFER-ADAS approach on the test CK+ dataset. These results represent the improved emotion classification outcomes of the TLDFER-ADAS model. For instance, on the entire dataset, the TLDFER-ADAS technique accomplished an average $accu_y$ of 99.19%, $prec_n$ of 96.64%, $reca_l$ of 94.54%, $F_{score}$ of 95.53%, and $AUC_{score}$ of 96.95%. Likewise, on 70% of the TR dataset, the TLDFER-ADAS system depicted an average $accu_y$ of 99.29%, $prec_n$ of 96.63%, $reca_l$ of 94.82%, $F_{score}$ of 95.69%, and $AUC_{score}$ of 97.12%. Lastly, on 30% of the TS dataset, the TLDFER-ADAS approach exhibited an average $accu_y$ of 98.95%, $prec_n$ of 96.83%, $reca_l$ of 93.32%, $F_{score}$ of 94.87%, and $AUC_{score}$ of 96.27%.
The $TR_{acc}$ and $VL_{acc}$ acquired by the TLDFER-ADAS methodology on the CK+ database are displayed in Figure 10. The simulation results state that the TLDFER-ADAS algorithm gained higher values of $TR_{acc}$ and $VL_{acc}$. In certain instances, $VL_{acc}$ appears better than $TR_{acc}$.
The $TR_{loss}$ and $VL_{loss}$ attained by the TLDFER-ADAS method on the CK+ database are portrayed in Figure 11. The simulation results show that the TLDFER-ADAS methodology achieved decreased values of $TR_{loss}$ and $VL_{loss}$. In particular, $VL_{loss}$ is less than $TR_{loss}$.
An overall $accu_y$ comparison of the TLDFER-ADAS model with other models on the two datasets is given in Table 4. In Figure 12, brief emotion recognition results of the TLDFER-ADAS model and existing approaches are provided on the FER-2013 dataset. The results imply that the improved FRCNN model demonstrated the lowest accuracy, whereas the DNN and PGC models exhibited somewhat better performance. The Asm-SVM and FPD-NN models demonstrated reasonable results with $accu_y$ of 97.91% and 97.03%, respectively. Finally, the TLDFER-ADAS model outperformed the other models with a maximum $accu_y$ of 99.31%.
In Figure 13, a detailed comparative study of the TLDFER-ADAS model with existing approaches is given on the CK+ dataset. The obtained values indicate that the improved FRCNN model exhibited the lowest accuracy, whereas the Asm-SVM and PGC models displayed somewhat improved performance. At the same time, the DNN and FPD-NN models confirmed equitable results with $accu_y$ of 96.03% and 96.22%, respectively. Finally, the TLDFER-ADAS model outpaced the other models with a maximum $accu_y$ of 99.29%.
As shown in the figures, the proposed model demonstrates enhanced performance on both datasets. For instance, on the FER-2013 dataset, the proposed model obtained a maximum accuracy of 99.31%, whereas the best existing model, Asm-SVM, attained an accuracy of 97.91%, indicating an improvement of 1.4%. Similarly, on the CK+ dataset, the proposed model obtained a maximum accuracy of 99.29%, whereas the best existing model, FPD-NN, attained an accuracy of 96.22%, indicating an improvement of 3.07%. These results confirm the enhanced performance of the TLDFER-ADAS model over the other models, which is mainly attributable to the parameter tuning process.

5. Conclusions

In this study, we have introduced a new TLDFER-ADAS technique for smart city environments. The TLDFER-ADAS technique aids proper driving by determining the emotions of drivers, which helps ensure road safety. The TLDFER-ADAS technique first performs a contrast enhancement procedure to enhance image quality. In the TLDFER-ADAS technique, the Xception model was applied to produce feature vectors. For driver emotion classification, MRFO with the QDNN model was exploited in this work. The experimental result analysis of the TLDFER-ADAS technique was performed on benchmark datasets. A widespread experimental analysis demonstrated the enhancements of the TLDFER-ADAS technique over other techniques.
In the future, the emotion detection results can be improved by using ensemble deep learning classifiers with optimal hyperparameter tuning strategies. In addition, the proposed model can be tested on large-scale real-time datasets. Moreover, the proposed model can be extended with hybrid metaheuristic optimizers.

Author Contributions

Conceptualization, A.M.H.; Data curation, M.M.; Formal analysis, A.A.A. and M.I.E.; Funding acquisition, D.H.E.; Methodology, A.M.H. and M.I.E.; Project administration, A.M.H.; Resources, A.A.A. and A.S.Z.; Software, M.O., A.A.A. and A.S.Z.; Supervision, S.S.A.; Validation, A.M.H., S.S.A., M.M. and A.S.Z.; Visualization, M.O.; Writing–original draft, D.H.E., S.S.A. and M.M.; Writing–review & editing, M.O. and M.I.E. All authors have read and agreed to the published version of the manuscript.

Funding

The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work through Small Groups Project under grant number (168/43). Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2022R238), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia. The authors would like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work by Grant Code: (22UQU4210118DSR52).

Institutional Review Board Statement

This article does not contain any studies with human participants performed by any of the authors.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing is not applicable to this article as no datasets were generated during the current study.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Kandeel, A.A.; Abbas, H.M.; Hassanein, H.S. Explainable model selection of a convolutional neural network for driver’s facial emotion identification. In Proceedings of the International Conference on Pattern Recognition, Milan, Italy, 10–15 January 2021; Springer: Cham, Germany, 2021; pp. 699–713.
2. Gera, D.; Balasubramanian, S.; Jami, A. CERN: Compact facial expression recognition net. Pattern Recognit. Lett. 2022, 155, 9–18.
3. Li, W.; Cui, Y.; Ma, Y.; Chen, X.; Li, G.; Zeng, G.; Guo, G.; Cao, D. A Spontaneous Driver Emotion Facial Expression (Defe) Dataset for Intelligent Vehicles: Emotions Triggered by Video-Audio Clips in Driving Scenarios. In IEEE Transactions on Affective Computing; IEEE: Piscataway Township, NJ, USA, 2021.
4. Bodapati, J.D.; Naik, D.S.; Suvarna, B.; Naralasetti, V. A Deep Learning Framework with Cross Pooled Soft Attention for Facial Expression Recognition. J. Inst. Eng. Ser. B 2022, 103, 1395–1405.
5. Deng, W.; Wu, R. Real-time driver-drowsiness detection system using facial features. IEEE Access 2019, 7, 118727–118738.
6. Dias, W.; Andaló, F.; Padilha, R.; Bertocco, G.; Almeida, W.; Costa, P.; Rocha, A. Cross-dataset emotion recognition from facial expressions through convolutional neural networks. J. Vis. Commun. Image Represent. 2022, 82, 103395.
7. Yan, K.; Zheng, W.; Zhang, T.; Zong, Y.; Cui, Z. Cross-database non-frontal facial expression recognition based on transductive deep transfer learning. arXiv 2018, arXiv:1811.12774.
8. Jabbar, R.; Al-Khalifa, K.; Kharbeche, M.; Alhajyaseen, W.; Jafari, M.; Jiang, S. Real-time driver drowsiness detection for android application using deep neural networks techniques. Procedia Comput. Sci. 2018, 130, 400–407.
9. Hung, J.C.; Chang, J.W. Multi-level transfer learning for improving the performance of deep neural networks: Theory and practice from the tasks of facial emotion recognition and named entity recognition. Appl. Soft Comput. 2021, 109, 107491.
10. Alzubi, J.A.; Jain, R.; Alzubi, O.; Thareja, A.; Upadhyay, Y. Distracted driver detection using compressed energy efficient convolutional neural network. J. Intell. Fuzzy Syst. 2022, 42, 1253–1265.
11. Yang, H.; Zhang, Z.; Yin, L. Identity-adaptive facial expression recognition through expression regeneration using conditional generative adversarial networks. In Proceedings of the 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018), Xi’an, China, 15–19 May 2018; IEEE: Piscataway Township, NJ, USA, 2018; pp. 294–301.
12. Khanzada, A.; Bai, C.; Celepcikay, F.T. Facial expression recognition with deep learning. arXiv 2020, arXiv:2004.11823.
13. Hossain, S.; Umer, S.; Asari, V.; Rout, R.K. A unified framework of deep learning-based facial expression recognition system for diversified applications. Appl. Sci. 2021, 11, 9174.
14. Macalisang, J.R.; Alon, A.S.; Jardiniano, M.F.; Evangelista, D.C.P.; Castro, J.C.; Tria, M.L. Drive-Awake: A YOLOv3 Machine Vision Inference Approach of Eyes Closure for Drowsy Driving Detection. In Proceedings of the 2021 IEEE International Conference on Artificial Intelligence in Engineering and Technology (IICAIET), Kota Kinabalu, Malaysia, 13–15 September 2021; IEEE: Piscataway Township, NJ, USA, 2021; pp. 1–5.
15. Rescigno, M.; Spezialetti, M.; Rossi, S. Personalized models for facial emotion recognition through transfer learning. Multimed. Tools Appl. 2020, 79, 35811–35828.
16. Hou, M.; Wang, M.; Zhao, W.; Ni, Q.; Cai, Z.; Kong, X. A lightweight framework for abnormal driving behavior detection. Comput. Commun. 2022, 184, 128–136.
17. Shao, J.; Qian, Y. Three convolutional neural network models for facial expression recognition in the wild. Neurocomputing 2019, 355, 82–92.
18. Sini, J.; Marceddu, A.C.; Violante, M.; Dessì, R. Passengers’ emotions recognition to improve social acceptance of autonomous driving vehicles. In Progresses in Artificial Intelligence and Neural Systems; Springer: Singapore, 2021; pp. 25–32.
19. Naqvi, R.A.; Arsalan, M.; Rehman, A.; Rehman, A.U.; Loh, W.K.; Paul, A. Deep learning-based drivers emotion classification system in time series data for remote applications. Remote Sens. 2020, 12, 587.
20. Paikrao, P.; Mukherjee, A.; Jain, D.K.; Chatterjee, P.; Alnumay, W. Smart emotion recognition framework: A secured IOVT perspective. In IEEE Consumer Electronics Magazine; IEEE: Piscataway Township, NJ, USA, 2021.
21. Jeong, M.; Ko, B.C. Driver’s facial expression recognition in real-time for safe driving. Sensors 2018, 18, 4270.
22. Sukhavasi, S.B.; Sukhavasi, S.B.; Elleithy, K.; El-Sayed, A.; Elleithy, A. A hybrid model for driver emotion detection using feature fusion approach. Int. J. Environ. Res. Public Health 2022, 19, 3085.
23. Xiao, H.; Li, W.; Zeng, G.; Wu, Y.; Xue, J.; Zhang, J.; Li, C.; Guo, G. On-Road Driver Emotion Recognition Using Facial Expression. Appl. Sci. 2022, 12, 807.
24. Mehendale, N. Facial emotion recognition using convolutional neural networks (FERC). SN Appl. Sci. 2020, 2, 446.
25. Oh, G.; Ryu, J.; Jeong, E.; Yang, J.H.; Hwang, S.; Lee, S.; Lim, S. Drer: Deep learning–based driver’s real emotion recognizer. Sensors 2021, 21, 2166.
26. Li, W.; Zeng, G.; Zhang, J.; Xu, Y.; Xing, Y.; Zhou, R.; Guo, G.; Shen, Y.; Cao, D.; Wang, F.Y. CogEmoNet: A Cognitive-Feature-Augmented Driver Emotion Recognition Model for Smart Cockpit. IEEE Trans. Comput. Soc. Syst. 2021, 9, 667–678.
27. Ma, J.; Fan, X.; Yang, S.X.; Zhang, X.; Zhu, X. Contrast limited adaptive histogram equalization-based fusion in YIQ and HSI color spaces for underwater image enhancement. Int. J. Pattern Recognit. Artif. Intell. 2018, 32, 1854018.
28. Lo, W.W.; Yang, X.; Wang, Y. An xception convolutional neural network for malware classification with transfer learning. In Proceedings of the 2019 10th IFIP International Conference on New Technologies, Mobility and Security (NTMS), Canary Island, Spain, 24–26 June 2019; IEEE: Piscataway Township, NJ, USA, 2019; pp. 1–5.
29. Dharmawan, W.; Nambo, H. End-to-End Xception model implementation on Carla Self Driving Car in moderate dense environment. In Proceedings of the 2019 2nd Artificial Intelligence and Cloud Computing Conference, Kobe, Japan, 21–23 December 2019; pp. 139–143.
30. Zhang, L.; Wang, M.; Fu, Y.; Ding, Y. A Forest Fire Recognition Method Using UAV Images Based on Transfer Learning. Forests 2022, 13, 975.
31. Jeswal, S.K.; Chakraverty, S. Recent developments and applications in quantum neural network: A review. Arch. Comput. Methods Eng. 2019, 26, 793–807.
32. Tate, N.; Miyata, Y.; Sakai, S.I.; Nakamura, A.; Shimomura, S.; Nishimura, T.; Kozuka, J.; Ogura, Y.; Tanida, J. Quantitative analysis of nonlinear optical input/output of a quantum-dot network based on the echo state property. Opt. Express 2022, 30, 14669–14676.
33. Houssein, E.H.; Zaki, G.N.; Diab, A.A.Z.; Younis, E.M. An efficient Manta Ray Foraging Optimization algorithm for parameter extraction of three-diode photovoltaic model. Comput. Electr. Eng. 2021, 94, 107304.
34. Hu, G.; Li, M.; Wang, X.; Wei, G.; Chang, C.T. An enhanced manta ray foraging optimization algorithm for shape optimization of complex CCG-Ball curves. Knowl. Based Syst. 2022, 240, 108071.
Figure 1. Overall block diagram of TLDFER-ADAS system.
Figure 2. Structure of QDNN.
Figure 3. Sample images.
Figure 4. Confusion matrices of TLDFER-ADAS system under FER-2013 dataset: (a) entire database; (b) 70% of TR database; and (c) 30% of TS database.
Figure 5. Result analysis of TLDFER-ADAS system under FER-2013 dataset.
Figure 6. $TR_{acc}$ and $VL_{acc}$ analysis of TLDFER-ADAS system under FER-2013 dataset.
Figure 7. $TR_{loss}$ and $VL_{loss}$ analysis of TLDFER-ADAS system under FER-2013 dataset.
Figure 8. Confusion matrices of TLDFER-ADAS system under CK+ dataset: (a) entire database; (b) 70% of TR database; and (c) 30% of TS database.
Figure 9. Average analysis of TLDFER-ADAS system under CK+ dataset.
Figure 10. $TR_{acc}$ and $VL_{acc}$ analysis of TLDFER-ADAS system under CK+ dataset.
Figure 11. $TR_{loss}$ and $VL_{loss}$ analysis of TLDFER-ADAS system under CK+ dataset.
Figure 12. $Accu_y$ analysis of TLDFER-ADAS approach under FER-2013 dataset.
Figure 13. $Accu_y$ analysis of TLDFER-ADAS approach under CK+ dataset.
Table 1. Dataset details.

Class                      FER-2013    CK+ (Last Frame)
Angry                      4593        45
Disgust                    547         59
Fear                       5121        25
Happy                      8989        69
Sad                        6077        28
Surprise                   4002        83
Neutral                    6198        327
Total Number of Samples    35,527      636
Table 2. Result analysis of TLDFER-ADAS system with distinct classes under FER-2013 dataset.

FER-2013 Dataset
Class       Accuracy    Precision   Recall      F-Score     AUC Score
Entire Dataset
Angry       99.52       98.57       97.69       98.13       98.74
Disgust     99.62       84.80       91.77       88.15       95.76
Fear        99.23       96.10       98.69       97.38       99.01
Happy       99.29       98.79       98.39       98.59       98.99
Sad         98.97       98.84       95.11       96.94       97.44
Surprise    99.26       97.03       96.40       96.72       98.01
Neutral     99.16       96.46       98.79       97.61       99.01
Average     99.29       95.80       96.69       96.22       98.14
Training Phase (70%)
Angry       99.52       98.30       98.03       98.17       98.89
Disgust     99.61       84.17       91.88       87.86       95.81
Fear        99.24       96.18       98.69       97.42       99.02
Happy       99.31       98.93       98.35       98.64       99.00
Sad         98.97       98.80       95.15       96.94       97.46
Surprise    99.23       97.16       95.94       96.55       97.79
Neutral     99.12       96.27       98.72       97.48       98.96
Average     99.29       95.69       96.68       96.15       98.13
Testing Phase (30%)
Angry       99.51       99.24       96.88       98.04       98.38
Disgust     99.64       86.29       91.52       88.82       95.64
Fear        99.21       95.91       98.69       97.28       98.99
Happy       99.23       98.47       98.47       98.47       98.98
Sad         98.97       98.92       95.03       96.94       97.41
Surprise    99.33       96.74       97.45       97.09       98.51
Neutral     99.24       96.87       98.95       97.90       99.13
Average     99.31       96.06       96.71       96.36       98.15
Table 3. Result analysis of TLDFER-ADAS system with distinct classes under CK+ dataset.

CK+ Dataset
Class       Accuracy    Precision   Recall      F-Score     AUC Score
Entire Dataset
Angry       99.06       91.49       95.56       93.48       97.44
Disgust     99.69       100.00      96.61       98.28       98.31
Fear        99.53       95.83       92.00       93.88       95.92
Happy       98.58       96.88       89.86       93.23       94.75
Sad         99.37       96.15       89.29       92.59       94.56
Surprise    99.69       98.80       98.80       98.80       99.31
Neutral     98.43       97.31       99.69       98.49       98.39
Average     99.19       96.64       94.54       95.53       96.95
Training Phase (70%)
Angry       99.33       92.59       96.15       94.34       97.84
Disgust     99.78       100.00      97.73       98.85       98.86
Fear        99.33       93.33       87.50       90.32       93.63
Happy       99.10       97.78       93.62       95.65       96.68
Sad         99.33       95.24       90.91       93.02       95.34
Surprise    99.78       100.00      98.28       99.13       99.14
Neutral     98.43       97.47       99.57       98.51       98.38
Average     99.29       96.63       94.82       95.69       97.12
Testing Phase (30%)
Angry       98.43       90.00       94.74       92.31       96.79
Disgust     99.48       100.00      93.33       96.55       96.67
Fear        100.00      100.00      100.00      100.00      100.00
Happy       97.38       94.74       81.82       87.80       90.61
Sad         99.48       100.00      83.33       90.91       91.67
Surprise    99.48       96.15       100.00      98.04       99.70
Neutral     98.43       96.94       100.00      98.45       98.44
Average     98.95       96.83       93.32       94.87       96.27
Table 4. Accuracy analysis of TLDFER-ADAS approach with other algorithms under two datasets.

Accuracy (%)
Methods            FER-2013    CK+ (Last Frame)
TLDFER-ADAS        99.31       99.29
DNN                95.65       96.03
Asm-SVM            97.91       94.29
PGC                96.39       95.81
FPD-NN             97.03       96.22
Improved FRCNN     94.78       94.29