Article

Establishing an Intelligent Emotion Analysis System for Long-Term Care Application Based on LabVIEW

Department of Industrial Education and Technology, National Changhua University of Education, No. 1, Jin-De Rd., Changhua 500, Taiwan
* Authors to whom correspondence should be addressed.
Sustainability 2022, 14(14), 8932; https://doi.org/10.3390/su14148932
Submission received: 29 May 2022 / Revised: 14 July 2022 / Accepted: 19 July 2022 / Published: 21 July 2022
(This article belongs to the Special Issue Sustainable Smart Cities and Societies Using Emerging Technologies)

Abstract

In this study, the authors implemented an intelligent long-term care system based on deep learning techniques, using an AI model that can be integrated with the Laboratory Virtual Instrument Engineering Workbench (LabVIEW) application for sentiment analysis. The input data are a database of numerous facial features and environmental variables that have been processed and analyzed; the output decisions are the corresponding controls for sentiment analysis and prediction. A convolutional neural network (CNN) is used to handle the complex deep learning process: after the convolutional layers simplify the processing of the image matrix, the results are computed by the fully connected layer. Furthermore, a Multilayer Perceptron (MLP) model embedded in LabVIEW is constructed for numerical transformation, analysis, and predictive control; it predicts the control corresponding to emotional and environmental variables. Moreover, LabVIEW is used to design the sensor components, data displays, and control interfaces. Remote sensing and control are achieved using LabVIEW's built-in web publishing tools.

1. Introduction

According to the United Nations World Population Prospects 2019 report, the total global population in 2020 was close to 7.8 billion people, of whom 730 million were over 65 years old, about 9.3%. It is estimated that by 2050 the number of elderly people will reach 1.5 billion, double the 2020 figure, or about 16% of the population, showing that population aging is a global phenomenon [1]. In recent years, the world has faced the double burden of chronic diseases and emerging infectious diseases. Chronic diseases, which often occur in the elderly, consume substantial medical resources, and the COVID-19 pandemic has seriously affected human mental health, showing that chronic and mental diseases have become common problems in modern society. The Industrial Technology Research Institute and Nan Shan Life jointly published a white paper on long-term care for the elderly in Taiwan in 2021, indicating that the elderly-care industry will combine artificial intelligence and machine learning in the future to transform traditional care into higher-quality and more efficient smart care [2]. According to a 2021 survey by CommonWealth Magazine, the average life expectancy of Taiwan's rural residents is 7 years shorter than that of urban residents, with the lack of medical networks and long-term care the primary reason; bridging this gap is one of the motives of this study [3]. The development of a remote care system has therefore become an important issue in long-term care for the elderly [4].
The goal of this research is to construct an AI emotion recognition model and a long-term care medical system with remote monitoring functions. The physical and mental health of the elderly is highly valued: while gradually losing physical strength, the elderly must maintain a positive mental state to reduce the impact of aging, and that mental state can be judged by observing the face [5,6,7]. In [8], the authors tested the accuracy of predicting seven facial micro-expressions (e.g., happy, sad, angry, scared, surprised, and disgusted) using the Facial Expression Recognition 2013 dataset (FER-2013), which was provided by Kaggle and presented at the 2013 International Conference on Machine Learning (ICML) [9]. In [10], seven categories of emotional micro-expression feature points are described. Through real-time analysis of facial emotions, the mental state of the elderly, such as anger, depression, and sadness, can be judged [11]. This information can be fed back into medical care to assist long-term care and medical staff in their decision-making [12].
In modern society, the advancement of AI saves humans time and resources and makes life more convenient. For example, in [13], a lifelike virtual speaker is built through deep learning technology; in addition to an attractive static appearance, AI technology simulates mouth movements, facial expressions, and body movements in true synchronization with the voice. Deep learning techniques have been very successful in various fields, including medicine [14,15,16,17,18,19]. With the aging of the population, long-term care for the elderly has also attracted the attention of researchers, who hope that emerging technologies can solve its existing difficulties. In [20], the application of convolutional neural networks to the diagnosis and nursing of Alzheimer's disease (AD), a disease common in the elderly, is explored; the analysis results of the CNN model are better than those of the fully convolutional neural network (FCNN) and the support vector machine (SVM) algorithm.
This study develops an AI model for facial recognition so that the care system possesses facial emotion recognition ability, which can be applied to the long-term care needs of the elderly. The developed system uses LabVIEW to collect remote information such as monitoring images, temperature, and humidity. In the program design, Python is used to realize the CNN artificial neural network, and the facial recognition AI model is embedded in LabVIEW to predict the emotions of the care recipients; LabVIEW then issues the appropriate corresponding care responses according to the AI model's judgment.
LabVIEW enables functions such as data acquisition, storage, retrieval, display, and control [21], and accomplishes wireless transmission and real-time image monitoring [22]. The intelligent model judges the facial emotional changes of care recipients in the long-term care environment and then outputs control signals. LabVIEW's built-in signal output modules and web publishing tools provide remote sensing and control: the system monitors the long-term care environment while each sensor transmits its data over a wireless connection, and internet-connected devices in different locations can be used for real-time monitoring and control.
In addition, on the control side of the system deployment, the output control signals can activate different relays to provide different care responses such as alarms, air conditioning, lighting, sound, and electronic notifications [23]. This research integrates LabVIEW and IoT technology to construct an intelligent home-based long-term care system.
Facial emotion recognition systems have a wide range of applications in smart homes and medical care. Functionally, such a system must accurately recognize the face or emotion in order to produce the correct judgment output [8,23,24,25,26,27,28,29,30,31,32,33,34], but none of the above articles describes a system built by integrating LabVIEW and Python. In [35,36,37,38,39,40,41,42], different recognition applications and computing methods are described; for example, [42] proposes a CNN architecture to classify different plant images from collected sequences. In this research, a convolutional neural network (CNN) gives the system its identification and judgment ability; this technology is one of the more mature technologies in intelligent image detection [43].

2. Preliminaries

CNNs have very important applications in different fields [43]. The CNN method extends the multilayer perceptron (MLP) method to two-dimensional processing of data, with the most common applications being image processing and feature recognition [44]. The difference between CNNs and MLPs is that each neuron in a CNN exists in two dimensions, while each neuron in an MLP has only one. The most beneficial aspect of CNNs is that they reduce the number of parameters in ANNs, an achievement that has allowed researchers and developers to build larger models for complex tasks that classic ANNs could not handle. An important assumption for problems solved by CNNs is that their features should not be spatially dependent; in a face detection application, for example, we do not need to pay attention to where the faces are located in the images [45]. Therefore, this research builds an AI model for real-time image sentiment analysis on the CNN model. The structure diagram is shown in Figure 1.
A CNN consists of three kinds of layers: the convolution layer (CONV), the pooling layer, and the fully connected layer.
A convolutional layer contains an entire set of filters, each of which is convolved with the input image. At the beginning of the operation, all convolution kernels are randomly initialized. The network then calculates the neuron outputs from the coefficients of the convolution kernels and connects the calculated neuron weights to the input of the next convolutional layer.
ReLU (Rectified Linear Unit) is an activation function used in neural networks. The main purpose of an activation function is to increase the nonlinearity of the neural network model, so that the network can actively learn rather than remaining as rigid as a linear function.
Pooling layers are used to reduce the dimensions of the feature maps, which reduces the number of parameters to learn and the amount of computation performed in the network. A pooling layer summarizes the features present in a region of the feature map generated by a convolutional layer [46].
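As a minimal numeric illustration (not taken from the paper's code), the following sketch shows how a 2 × 2 max-pooling window halves each spatial dimension of a feature map, shrinking the data passed to later layers:

```python
import numpy as np

def max_pool_2x2(feature_map: np.ndarray) -> np.ndarray:
    """Reduce an (H, W) feature map with a 2x2 window and stride 2."""
    h, w = feature_map.shape
    # Trim odd edges so the map tiles evenly into 2x2 blocks.
    trimmed = feature_map[: h // 2 * 2, : w // 2 * 2]
    blocks = trimmed.reshape(h // 2, 2, w // 2, 2)
    # Keep only the maximum value of each 2x2 block.
    return blocks.max(axis=(1, 3))

fmap = np.arange(16, dtype=float).reshape(4, 4)
print(fmap.shape, "->", max_pool_2x2(fmap).shape)  # (4, 4) -> (2, 2)
```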
A CNN's local interconnection characteristics make it easily realizable in very-large-scale integration (VLSI), either as planar or multilayer structures, and suit it to real-time, continuous, high-speed parallel signal processing. With the cells ordered, in the case where cell interactions are linear and the input of each cell is constant, the state of the CNN is described by the first-order nonlinear differential system shown in Equation (1) [47]:
$\dot{x}(t) = R\,x(t) + A\,y(t) + u \qquad (1)$
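For intuition only, Equation (1) can be integrated numerically. The sketch below uses forward Euler with small illustrative matrices R and A, a constant input u, and the standard piecewise-linear cellular-network output; all values are assumptions for illustration, not parameters from this study:

```python
import numpy as np

# Illustrative 2-cell system: x' = R x + A y(x) + u.
R = np.array([[-1.0, 0.0], [0.0, -1.0]])  # assumed cell self-feedback
A = np.array([[0.5, 0.1], [0.1, 0.5]])    # assumed coupling template
u = np.array([0.2, -0.1])                 # assumed constant input per cell

def output(x: np.ndarray) -> np.ndarray:
    # Piecewise-linear cellular-network output y = 0.5(|x + 1| - |x - 1|).
    return 0.5 * (np.abs(x + 1) - np.abs(x - 1))

x = np.zeros(2)
dt = 0.01
for _ in range(1000):                     # integrate 10 time units
    x = x + dt * (R @ x + A @ output(x) + u)
print("steady state:", x)
```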

3. System Structure

LabVIEW is the core programming software for the graphical user interface (GUI). In addition to providing the human-machine interface and data reception, it can develop the system as a web page or as API-style applications, and handle remote monitoring, CAM image acquisition, and the corresponding controls. Python is used to build the deep learning AI model embedded in LabVIEW. This research is developed on a common and mature CNN model; however, supervised learning with manual labels proved time-consuming and prone to overfitting during the research, so in the future an unsupervised Generative Adversarial Network (GAN) will be introduced to study a semi-supervised, adversarially generated variant of this model. Figure 2 shows the system structure for the data, sentiment analysis, and data processing of the intelligent long-term care environment. In addition, LabVIEW stores the newly collected image files in a new database for AI model calculation on the server; cloud storage is currently being set up so that, in the future, cloud edge computing can speed up data processing and transmission and reduce delays. On the interface side, an NI DAQ card provides the practical sensing and control capabilities. New images captured in real time while the system is running become the data source for continuous training and validation of the model.
Once the system is complete, the micro-expression emotion analysis function of the system module can be integrated into the long-term care system, as in the architecture of the smart long-term care monitoring system shown in Figure 3; the right side of the figure shows the corresponding control deployment, such as notifying the caregiver, adjusting the environment, or transmitting warning signals.
In the emotion recognition system, after facial features are captured from the image, the points of difference, for example, eyebrow contour, eye contour, pupils, nostrils, mouth contour, and mouth center, are organized into seven micro-expression categories with distinct classification criteria. The classification criteria are:
  • happy: the cheek muscles rise, the corners of the mouth rise and pull back, the eyebrows are flat, and the eyes narrow;
  • sadness: the upper eyelids droop, the pupils dilate, the corners of the mouth pull down, the cheeks pull down, and the eyebrows knit deeply;
  • anger: the nostrils enlarge and the eyes widen;
  • disgust: the nose wrinkles and the upper lip rises;
  • fear: the middle of the eyebrows is drawn together;
  • surprise: the mouth opens slightly, the pupils dilate, and the eyebrows rise;
  • contempt: the lip corners tighten and lift on one side of the face only, and one eyebrow rises.
As Figure 4 shows, the feature points of the same emotion differ little across ages, and after grayscale processing, skin color has little influence; the AI model can therefore be trained to identify emotions from these feature points.

4. Main Results

In this study, LabVIEW is used to construct a monitoring-control interface for the AI model that integrates facial feature recognition and emotion analysis. The monitoring and control interface is shown in Figure 5. The human-machine interface design includes (A) the environment monitoring block, (B) the user login block, (C) the block of emotion prediction results, (D) the block of instant emotion analysis of facial expressions, and (E) the AI model scheduling control area. In addition, the established model archives are scheduled in the system with the LabVIEW tool library's Python nodes, which connect the trained and tested AI models to predict the actual control needs in the long-term care environment. With this method of construction, we can build AI models into neural networks that predict emotions and perform intelligent control based on LabVIEW. In addition, LabVIEW features help create friendly GUIs that facilitate neural network operation, analysis, and monitoring.
The proposed system is designed to integrate various AI models in future developments; in addition to the CNN model trained in this paper, other deep learning models can be used in future updates or applications. These AI models can be selected in block E of the human-machine interface. In this study, the selected AI model is a CNN with a predictive-control MLP framework. Block E can show the predicted quantization scores, reset the data, stop detection, and select AI models. Figure 6 shows the situation when different AI models are selected.

4.1. Environmental Monitoring

In the environmental monitoring section, the HMI displays deployed sensor data and real-time environmental conditions. The displayed data are captured by the sensors in real time and converted to standard units by LabVIEW calculations, including the current number of people, temperature, humidity, and brightness. Figure 7 shows the program design diagram of environmental monitoring. The function is designed to monitor environmental conditions in order to further predict and analyze environmental control. Since the environment may be one of the factors affecting the target person's emotions, the module collects environmental data such as crowd density, temperature, and light, although it is currently limited by a lack of paired environmental and emotional data. In the future, as these data accumulate, the sentiment analysis can further explore correlations, which can then be added as control factors to make the system more complete.
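Because a DAQ card returns raw voltages, a conversion step of the following kind is implied before display; the sensor types and scale factors below are hypothetical stand-ins, not the hardware actually deployed:

```python
def volts_to_celsius(v: float) -> float:
    # Assumed LM35-style temperature sensor: 10 mV per degree Celsius.
    return v * 100.0

def volts_to_percent_rh(v: float) -> float:
    # Assumed linear humidity sensor: 0-5 V maps to 0-100 %RH.
    return v / 5.0 * 100.0

def volts_to_lux(v: float, full_scale_lux: float = 1000.0) -> float:
    # Assumed photometric sensor: 0-5 V maps linearly to full scale in lux.
    return v / 5.0 * full_scale_lux

print(volts_to_celsius(0.25), volts_to_percent_rh(2.5), volts_to_lux(1.0))
```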

4.2. Sign-in Function

In addition to protecting the system from unauthorized users and protecting privacy, the system records who logged in and when to a file for future research and tracking. Authentication is deployed before the operating interface, as shown in block B of Figure 5. At this stage, the login prevents accidental operation by the elderly; because entering credentials is inconvenient for some users, face recognition will be integrated as the safety mechanism once model verification is complete. If the login process fails, the monitoring system cannot be entered. Figure 8 depicts a successful system login.
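A minimal sketch of such a login audit trail, assuming a plain CSV log file and a pre-registered credential table (all names, paths, and the hashing scheme here are hypothetical, not the system's actual implementation):

```python
import csv
import hashlib
from datetime import datetime

# Assumed credential store: username -> SHA-256 hash of the password.
CREDENTIALS = {"caregiver01": hashlib.sha256(b"s3cret").hexdigest()}

def try_login(user: str, password: str, log_path: str = "login_log.csv") -> bool:
    ok = CREDENTIALS.get(user) == hashlib.sha256(password.encode()).hexdigest()
    # Record who attempted to log in, when, and whether it succeeded.
    with open(log_path, "a", newline="") as f:
        csv.writer(f).writerow(
            [datetime.now().isoformat(), user, "success" if ok else "failure"])
    return ok

if try_login("caregiver01", "s3cret"):
    print("entering monitoring system")
```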

4.3. Sentiment Prediction

In the AI model design, Python is used to construct the required AI modules, which are first trained and tested to verify the feasibility of the model; the programming portion is shown in Figure 9. The AI model (a Python CNN architecture) is embedded in the system as an API using the LabVIEW Python node. The system receives the detected images instantly on the human-machine interface, and the trained model verifies the facial expressions and performs predictive analysis at the back end of the system. The detected image data are also transferred back into a new training set for subsequent model validation. The obtained expressions are quantified by LabVIEW into prediction scores for the sentiment, as shown in Figure 10. This section corresponds to block A in Figure 5, and the results of these operations are used for various control decisions in the long-term care environment.
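The LabVIEW Python Node calls a named function in a .py module, passing LabVIEW data as arguments and returning the results to the block diagram. The sketch below shows the kind of entry point such a node could call; the function name, model file, and label order are assumptions rather than the authors' published code, while the 48 × 48 grayscale input matches the data described in Section 4.5:

```python
import numpy as np
from tensorflow.keras.models import load_model

MODEL = load_model("emotion_cnn.h5")  # hypothetical trained-model archive
LABELS = ["happy", "anger", "sadness", "fear", "disgust", "surprise", "contempt"]

def predict_emotion(pixels: list) -> list:
    """Entry point for the LabVIEW Python Node.

    `pixels` is a flat list of 48*48 grayscale values in [0, 255];
    returns seven prediction scores in LABELS order.
    """
    x = np.asarray(pixels, dtype=np.float32).reshape(1, 48, 48, 1) / 255.0
    return MODEL.predict(x, verbose=0)[0].tolist()
```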

4.4. Facial Expression Image Monitoring

Figure 11 shows the predicted emotion values for real-time face recognition, or informs the system whether a face image has been detected by the CAM. The predicted emotion is analyzed to further control the environment or to alert medical staff so that they can plan the next strategy. Owing to national laws and current difficulties in obtaining personal data, the strategy portion is still inconclusive; if the identity of the person can be detected in the future, medical staff will be able to tailor the response to the individual. Figure 12 shows part of the procedure for real-time facial expression monitoring and emotion recognition analysis.
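The capture-and-detect step can be illustrated with OpenCV; the actual acquisition runs through LabVIEW, so this is only a hedged sketch of detecting whether a face is present in a CAM frame and cropping it for the emotion model:

```python
import cv2

# Bundled Haar cascade for frontal faces (ships with opencv-python).
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)                  # first attached camera
ok, frame = cap.read()
cap.release()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        face = cv2.resize(gray[y:y + h, x:x + w], (48, 48))
        # `face` would now be flattened and passed to the emotion model.
    print(f"{len(faces)} face(s) detected")
```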

4.5. Model Construction and Methods

In this study, we used deep learning techniques to predict emotional changes in facial expressions. Deep learning is a branch of machine learning in which training and testing are repeated so that a model such as a neural network simulates deeper and more complex human thinking. This research uses the CNN structure shown in Figure 1: neural network operations are performed on the captured facial expression pictures to extract image features, analyze them, and predict different facial expressions. Figure 13 shows the AI model building steps of this research. The steps are as follows (a consolidated code sketch follows the list):
Step 1: Collect the images to be trained, convert the images into numerical form, and label them as training data in the database. This study follows the data classification of the FER-2013 dataset, labeling the data according to seven classifications: happy, anger, sadness (pain), fear, disgust, surprise, and contempt; the number of data in the dataset is shown in Table 1.
Step 2: Use Python to build the convolution and pooling layers of the neural network, comprising four nonlinear convolution operations and four pooling operations, as shown in Figure 14; then connect the fully connected layer at the back end and output the classification results, as shown in Figure 15.
Step 3: Input the training data processed in Step 1 into the CNN built in Step 2 for training.
Step 4: Analyze the model loss and accuracy on the training and test sets.
Step 5: After confirming that the model loss and accuracy demonstrate recognition ability, the developed AI model begins real-time image recognition work.
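For concreteness, the five steps can be sketched in Python/Keras as below. This is a minimal illustration, not the authors' released code: the file name expressions.csv, the column names pixels and emotion, the filter counts after the first 32, the dropout rate, and the epoch count are assumptions; only the 48 × 48 grayscale input, the four convolution/pooling stages, the 256 × 128 × 64 fully connected block, and the seven output classes come from the paper.

```python
import numpy as np
import pandas as pd
from tensorflow.keras import layers, models

LABELS = ["happy", "anger", "sadness", "fear", "disgust", "surprise", "contempt"]

# Step 1: load labeled images; a FER-2013-style CSV is assumed, where each
# row holds a class index and 48*48 space-separated grayscale pixel values.
df = pd.read_csv("expressions.csv")            # hypothetical dataset file
pixels = np.stack([
    np.asarray(row.split(), dtype=np.float32).reshape(48, 48, 1)
    for row in df["pixels"]
]) / 255.0                                     # normalize to [0, 1]
labels = np.eye(len(LABELS))[df["emotion"].to_numpy()]   # one-hot targets

# Step 2: four convolution + pooling stages, then a 256-128-64 fully
# connected block with dropout and a seven-class softmax output.
model = models.Sequential([layers.Input(shape=(48, 48, 1))])
for filters in (32, 64, 128, 256):             # 32 matches Figure 20; rest assumed
    model.add(layers.Conv2D(filters, (3, 3), padding="same", activation="relu"))
    model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
for units in (256, 128, 64):
    model.add(layers.Dense(units, activation="relu"))
    model.add(layers.Dropout(0.3))             # assumed dropout rate
model.add(layers.Dense(len(LABELS), activation="softmax"))
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])

# Steps 3-4: train, holding out part of the data to track loss and accuracy.
history = model.fit(pixels, labels, validation_split=0.2,
                    epochs=50, batch_size=64)

# Step 5: save the model for real-time recognition once accuracy is acceptable.
model.save("emotion_cnn.h5")
```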
Figure 15 is the architectural design diagram of the CNN AI model, and Figure 16 shows the practical appearance of the designed system, which uses NI-DAQmx to output control signals and allow remote control. The AI model begins by filtering, labeling, and processing selected images fetched from the CAM. Figure 17 shows the numerical data and labels after image processing, and Figure 18 shows the image test set before processing. These image label data are provided to the AI model for learning and training. After repeated operations across four layers (Figure 19), the detailed pixel features are extracted and finally used as the input of the fully connected layer, which gains the power to classify after its activation function (Figure 20).
During the training process, Figure 19 shows the design of the AI model, which repeatedly performs convolution and pooling operations. Since features produced by the convolution operation can introduce erroneous numerical image data, the pooling operation provides a form of translation invariance that continuously reduces the spatial size of the data, as shown in block A. As the input image passes through each convolution and pooling stage, the amount of data and computation decreases; this can be seen in the output-shape list in Figure 20, where the image data shrink starting from 48 × 48 × 32. This process also reduces the probability of overfitting, because the pooling layer removes inappropriate numerical conversions that the convolutional layers might impose on the image, eliminating skew, stains, and deformations.
As shown in the four red boxes of Figure 19, after the four-layer convolution and pooling process designed in this study, and following the FER-2013 database, the collected full-color images are converted into single-channel matrices, since black-and-white images are friendlier to the model. The generalized convolution operation is given in Formula (2), where x and y are measurable functions defined on $\mathbb{R}^n$ and their convolution is denoted x ∗ y: it is the integral of the product of one function, after reflection and translation, with the other, viewed as a function of the amount of translation. The number of features obtained after this cumulative operation is displayed in the param column. In the next stage, the flatten layer expands the feature data output by the convolution and pooling layers and performs the dimension conversion needed to feed the fully connected layer, after which the AI model is able to predict and classify.
$(x * y)(t) \overset{\mathrm{def}}{=} \int_{\mathbb{R}^n} x(\tau)\, y(t - \tau)\, d\tau \qquad (2)$
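The discrete analogue of Formula (2) can be checked numerically; the sketch below compares an explicit sum over x(τ)y(t − τ) with scipy's convolution routine:

```python
import numpy as np
from scipy.signal import convolve

x = np.array([1.0, 2.0, 3.0])
y = np.array([0.0, 1.0, 0.5])

# Explicit discrete form of (x * y)(t) = sum_tau x(tau) y(t - tau).
direct = np.zeros(len(x) + len(y) - 1)
for t in range(direct.size):
    for tau in range(len(x)):
        if 0 <= t - tau < len(y):
            direct[t] += x[tau] * y[t - tau]

assert np.allclose(direct, convolve(x, y))
print(direct)  # [0.  1.  2.5 4.  1.5]
```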
The function of the fully connected layer is to take the features output by the convolutional and pooling layers and adjust the weights and biases to obtain accurate classification predictions. As shown in Figure 20, the fully connected portion of this study is a three-layer 256 × 128 × 64 structure, and the final output is divided into 7 expression classifications (see Figure 15). The dropout layers within the fully connected block prevent overfitting during classification. During training, the model is continuously corrected and its loss reduced; the loss curve in Figure 21 shows a downward trend, meaning that prediction accuracy can be increased through continued training. Lastly, the model's accuracy is obtained from Formula (3): derived from the confusion matrix for a given test set (see Table 1 for the data counts), it is the ratio of the number of samples correctly classified by the classification model to the total number of samples (Table 2). This model reaches an accuracy of 87%.
$\mathrm{Accuracy} = \dfrac{TP + TN}{TP + TN + FP + FN} \qquad (3)$
TP: positive samples correctly identified as positive.
TN: negative samples correctly identified as negative.
FP: negative samples falsely identified as positive.
FN: positive samples falsely identified as negative.
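Formula (3) in code form; the counts below are placeholders chosen to reproduce an 87% figure, not the study's actual confusion-matrix counts:

```python
def accuracy(tp: int, tn: int, fp: int, fn: int) -> float:
    """Share of samples the classifier labeled correctly (Formula (3))."""
    return (tp + tn) / (tp + tn + fp + fn)

# Placeholder counts for illustration only.
print(f"{accuracy(tp=500, tn=370, fp=65, fn=65):.2%}")  # 87.00%
```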
Figure 21. Model training loss curve.
Table 2. Confusion matrix of the model's predictions against the positive and negative samples of the data set.

                      Actual (Positive)   Actual (Negative)
Predict (Positive)           TP                  FP
Predict (Negative)           FN                  TN
After the CNN deep learning AI model is trained and integrated with LabVIEW, the detected facial expressions and the judged emotions can be displayed on the human-machine interface. Figure 22 and Figure 23 show the system's real-time detection of facial emotion: Figure 22 shows the prediction of a sad expression, Figure 23 shows the prediction of a happy expression, and Figure 24 shows part of the program used for image processing and sentiment analysis.
This image recognition machine learning system integrates facial emotion recognition with the long-term care IoT system to complete an intelligent emotion analysis system for long-term care. The built CNN AI model can analyze detected facial expressions and issue the corresponding control to achieve intelligent long-term care operation. The system can share the workload and pressure of medical and nursing staff, grasp the psychological state of the care recipients in real time, and improve the overall quality of care.

5. Conclusions

This study uses AI technology combined with facial image recognition and environmental monitoring to alleviate the shortage of medical staff and the inability of caregivers to provide comprehensive care in remote rural areas. LabVIEW is used to construct the smart long-term care system, integrating Python to design the convolutional neural network face recognition AI module. The system can also transmit remote data via the IoT. The established identification database continuously adds training data through the operation of the system, so that the identification success rate keeps improving. The hardware includes a camera, sensors, and a data acquisition card for the interface. The integration of the software and hardware systems achieves the construction plan for smart long-term care. In practice, this design and construction method allows the system to add different sensing and control devices according to long-term care needs, so that different intelligent long-term care systems can be realized on the same basis.
In facial data processing of this kind, emotion recognition is subjective information that is difficult to monitor and lacks transparency, and it is difficult to draw conclusions when errors or perceptions are in doubt. Therefore, the application proposed in this study is not absolutely correct, but informative enough to provide warnings and suggestions. In the future, other detected or measured physiological parameters such as heart rate, oxygen saturation, and body temperature can be combined for more accurate medical applications, and the results can be used to validate the facial recognition system and enhance its accuracy.
In this study, the facial-expression prediction constructed on the LabVIEW platform works normally in both the hardware and software of the system, but the test results still show some problems, including:
  • There may be more appropriate hyperparameter configurations, such as for the convolutional and fully connected layers, or deeper models may be used to obtain better accuracy.
  • The data set needs augmentation: the amount of data in some categories is insufficient, resulting in low identification accuracy for those categories.
  • Due to national laws and treaty restrictions, more personal identity and health information cannot be added to the research materials or disclosed, leaving the research conclusions open to question.
  • The device could use CAMs with higher resolution and autofocus to improve the efficiency of detection and identification, and in the future could obtain more information from portable electronic devices to achieve a smarter system.

Author Contributions

All authors contributed meaningfully to this study. Research topic, K.-C.Y., W.-T.H., T.-Y.C., C.-C.W. and W.-S.H.; methodology, K.-C.Y. and W.-T.H.; software, T.-Y.C. and C.-C.W.; validation, K.-C.Y., W.-T.H., T.-Y.C., C.-C.W. and W.-S.H.; formal analysis, T.-Y.C. and C.-C.W.; investigation, T.-Y.C. and C.-C.W.; resources, K.-C.Y., W.-T.H. and W.-S.H.; data curation, K.-C.Y., W.-T.H. and W.-S.H.; writing—original draft preparation, K.-C.Y., W.-T.H., T.-Y.C., C.-C.W. and W.-S.H.; writing—review and editing, K.-C.Y., W.-T.H., T.-Y.C., C.-C.W. and W.-S.H.; visualization, K.-C.Y., W.-T.H. and W.-S.H.; supervision, K.-C.Y., W.-T.H. and W.-S.H.; project administration, K.-C.Y., W.-T.H. and W.-S.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially supported by the Ministry of Science and Technology, Taiwan under the Grant No. MOST 109-2511-H-018-018-MY3.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki. The experimental protocol was approved by the Institutional Review Board (IRB) of National Changhua University of Education.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Not applicable.

Acknowledgments

The authors are grateful for the technical support of the Virtual Instrument Control Center and the Smart Grid Technology and Application Laboratory of National Changhua University of Education.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ministry of Economic Affairs. 2021/2022 Industrial Technology White Paper; Taiwan Institute of Economic Research: Taipei City, Taiwan, 2021. Available online: https://reurl.cc/NAxmGm (accessed on 1 April 2022).
  2. Industrial Technology Research Institute & Nan Shan Life Insurance Company. Taiwan's Long-Term Care Industry White Paper; Industrial Technology Research Institute, International Strategy Development Institute: Taipei City, Taiwan, 2021. Available online: https://reurl.cc/55WN17 (accessed on 22 June 2022).
  3. CommonWealth Magazine. The Average Life Expectancy of Taiwanese Rural Residents Is 7 Years Shorter than That of Urban Residents! How Can 5G Bridge This Gap? Taipei City, Taiwan, 2021. Available online: https://reurl.cc/6ZORvd (accessed on 22 June 2022).
  4. Lepage, P.; Létourneau, D.; Hamel, M.; Briere, S.; Corriveau, H.; Tousignant, M.; Michaud, F. Telehomecare telecommunication framework—From remote patient monitoring to video visits and robot telepresence. In Proceedings of the 2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Orlando, FL, USA, 16–20 August 2016; pp. 3269–3272.
  5. Wada, K.; Shibata, T.; Saito, T.; Tanie, K. Psychological and social effects of robot assisted activity to elderly people who stay at a health service facility for the aged. In Proceedings of the 2003 IEEE International Conference on Robotics and Automation (Cat. No. 03CH37422), Taipei, Taiwan, 14–19 September 2003; Volume 3, pp. 3996–4001.
  6. Baños, R.M.; Etchemendy, E.; Castilla, D.; García-Palacios, A.; Quero, S.; Botella, C. Positive mood induction procedures for virtual environments designed for elderly people. Interact. Comput. 2012, 24, 131–138.
  7. Gallego-Perez, J.; Lohse, M.; Evers, V. Robots to motivate elderly people: Present and future challenges. In Proceedings of the 2013 IEEE RO-MAN, Gyeongju, Korea, 26–29 August 2013; pp. 685–690.
  8. Zahara, L.; Musa, P.; Wibowo, E.P.; Karim, I.; Musa, S.B. The Facial Emotion Recognition (FER-2013) Dataset for Prediction System of Micro-Expressions Face Using the Convolutional Neural Network (CNN) Algorithm based Raspberry Pi. In Proceedings of the 2020 Fifth International Conference on Informatics and Computing (ICIC), Gorontalo, Indonesia, 3–4 November 2020; pp. 1–9.
  9. Goodfellow, I.J.; Erhan, D.; Carrier, P.L.; Courville, A.; Mirza, M.; Hamner, B.; Cukierski, W.; Tang, Y.; Thaler, D.; Lee, D.H.; et al. Challenges in representation learning: A report on three machine learning contests. In Proceedings of the International Conference on Neural Information Processing, Daegu, Korea, 3–7 November 2013; Springer: Berlin/Heidelberg, Germany, 2013; pp. 117–124.
  10. Ekman, P. Micro Expressions. 2017. Available online: https://www.slideserve.com/tyrell/microexpressions (accessed on 22 June 2022).
  11. Lee, P.L.; Wang, C.L. Emotional Management for the Elderly. J. Crisis Manag. 2012, 9, 95–104.
  12. Huang, E.W.; Chiou, S.F.; Pan, M.L.; Wu, H.H.; Jiang, J.R.; Lu, Y.D. The Development of an Intelligent Long-Term Care Services System That Integrates Innovative Information and Communication Technologies. J. Nurs. Res. 2017, 64, 10–18.
  13. Wang, H.; Sharma, A.; Shabaz, M. Research on digital media animation control technology based on recurrent neural network using speech technology. Int. J. Syst. Assur. Eng. Manag. 2022, 13, 564–575.
  14. Islam, J.; Zhang, Y. Brain MRI analysis for Alzheimer's disease diagnosis using an ensemble system of deep convolutional neural networks. Brain Inform. 2018, 5, 2.
  15. Huang, Y.; Xu, J.; Zhou, Y.; Tong, T.; Zhuang, X.; Alzheimer's Disease Neuroimaging Initiative (ADNI). Diagnosis of Alzheimer's disease via multi-modality 3D convolutional neural network. Front. Neurosci. 2019, 13, 509.
  16. Wen, J.; Thibeau-Sutre, E.; Diaz-Melo, M.; Samper-González, J.; Routier, A.; Bottani, S.; Dormont, D.; Durrleman, S.; Burgos, N.; Colliot, O.; Alzheimer's Disease Neuroimaging Initiative. Convolutional neural networks for classification of Alzheimer's disease: Overview and reproducible evaluation. Med. Image Anal. 2020, 63, 101694.
  17. Duc, N.T.; Ryu, S.; Qureshi, M.N.I.; Choi, M.; Lee, K.H.; Lee, B. 3D-deep learning based automatic diagnosis of Alzheimer's disease with joint MMSE prediction using resting-state fMRI. Neuroinformatics 2020, 18, 71–86.
  18. Poongodi, M.; Sharma, A.; Hamdi, M.; Maode, M.; Chilamkurti, N. Smart healthcare in smart cities: Wireless patient monitoring system using IoT. J. Supercomput. 2021, 77, 12230–12255.
  19. Sodhi, G.K.; Kaur, S.; Gaba, G.S.; Kansal, L.; Sharma, A.; Dhiman, G. COVID-19: Role of robotics, artificial intelligence, and machine learning during pandemic. Curr. Med. Imaging 2021, 18, 124–134.
  20. Chen, X.; Li, L.; Sharma, A.; Dhiman, G.; Vimal, S. The Application of Convolutional Neural Network Model in Diagnosis and Nursing of MR Imaging in Alzheimer's Disease. Sci. Comput. Life Sci. 2022, 14, 34–44.
  21. Murugan, M.S.; Srikanth, L.; Naidu, V.P.S. Design and development of LabVIEW based environmental test chamber controller. In Proceedings of the 2017 International Conference on Electrical, Electronics, Communication, Computer, and Optimization Techniques (ICEECCOT), Mysuru, India, 15–16 December 2017; pp. 1–4.
  22. Wati, D.A.R.; Abadianto, D. Design of face detection and recognition system for smart home security application. In Proceedings of the 2017 2nd International Conferences on Information Technology, Information Systems and Electrical Engineering (ICITISEE), Yogyakarta, Indonesia, 1–2 November 2017; pp. 342–347.
  23. Yao, K.-C.; Huang, W.-T.; Wu, C.-C.; Chen, T.-Y. Establishing an AI Model on Data Sensing and Prediction for Smart Home Environment Control Based on LabVIEW. Math. Probl. Eng. 2021, 2021, 7572818.
  24. Jeon, T.; Bae, H.B.; Lee, Y.; Jang, S.; Lee, S. Deep-Learning-Based Stress Recognition with Spatial-Temporal Facial Information. Sensors 2021, 21, 7498.
  25. Cai, W.; Gao, M.; Liu, R.; Mao, J. MIFAD-net: Multi-layer interactive feature fusion network with angular distance loss for face emotion recognition. Front. Psychol. 2021, 12, 4707.
  26. Maithri, M.; Raghavendra, U.; Gudigar, A.; Samanth, J.; Barua, P.D.; Murugappan, M.; Chakole, Y.; Acharya, U.R. Automated Emotion Recognition: Current Trends and Future Perspectives. Comput. Methods Programs Biomed. 2022, 215, 106646.
  27. Kollias, D.; Zafeiriou, S.P. Exploiting multi-CNN features in CNN-RNN based dimensional emotion recognition on the OMG in-the-wild dataset. IEEE Trans. Affect. Comput. 2021, 12, 595–606.
  28. Gill, R.; Singh, J. A Deep Learning Approach for Real Time Facial Emotion Recognition. In Proceedings of the 2021 10th International Conference on System Modeling & Advancement in Research Trends (SMART), Moradabad, India, 10–11 December 2021; pp. 497–501.
  29. Ma, X.-X.; Zhang, X.-X.; Guo, L.-X.; Ding, Z.-W.; Zhang, L.-L.; Wei, S.-Y.; Fan, R.; Ma, Y.-Z. An intelligent old-age home endowment monitoring system based on Internet of Things. In Proceedings of the 2017 International Conference on Progress in Informatics and Computing (PIC), Nanjing, China, 15–17 December 2017; pp. 337–340.
  30. Dhuheir, M.; Albaseer, A.; Baccour, E.; Erbad, A.; Abdallah, M.; Hamdi, M. Emotion recognition for healthcare surveillance systems using neural networks: A survey. In Proceedings of the 2021 International Wireless Communications and Mobile Computing (IWCMC), Harbin City, China, 28 June–2 July 2021; pp. 681–687.
  31. Chen, H.; Zhao, Y.; Zhao, T.; Chen, J.; Li, S.; Zhao, S. Emotion Recognition of the Elderly Living Alone Based on Deep Learning. In Proceedings of the 2021 IEEE International Conference on Consumer Electronics-Taiwan (ICCE-TW), Penghu, Taiwan, 15–17 September 2021; pp. 1–2.
  32. Han, J.; Zhang, Z.; Ren, Z.; Schuller, B. EmoBed: Strengthening monomodal emotion recognition via training with crossmodal emotion embeddings. IEEE Trans. Affect. Comput. 2019, 12, 553–564.
  33. Vithanawasam, T.M.W.; Madhusanka, B.G.D.A. Face and upper-body emotion recognition using service robot's eyes in a domestic environment. In Proceedings of the 2019 International Research Conference on Smart Computing and Systems Engineering (SCSE), Colombo, Sri Lanka, 28 March 2019; pp. 44–50.
  34. Zhang, T.; Liu, M.; Yuan, T.; Al-Nabhan, N. Emotion-aware and intelligent internet of medical things toward emotion recognition during COVID-19 pandemic. IEEE Internet Things J. 2020, 8, 16002–16013.
  35. Cowie, R.; Douglas-Cowie, E.; Tsapatsoulis, N.; Votsis, G.; Kollias, S.; Fellenz, W.; Taylor, J.G. Emotion recognition in human-computer interaction. IEEE Signal Process. Mag. 2001, 18, 32–80.
  36. Huang, X.; Zhao, G.; Hong, X.; Zheng, W.; Pietikäinen, M. Spontaneous facial micro-expression analysis using spatiotemporal completed local quantized patterns. Neurocomputing 2016, 175, 564–578.
  37. Huang, X.; Wang, S.J.; Zhao, G.; Pietikäinen, M. Facial micro-expression recognition using spatiotemporal local binary pattern with integral projection. In Proceedings of the IEEE International Conference on Computer Vision Workshops, Santiago, Chile, 7–13 December 2015; pp. 1–9.
  38. Huang, X.; Wang, S.J.; Liu, X.; Zhao, G.; Feng, X.; Pietikäinen, M. Discriminative spatiotemporal local binary pattern with revisited integral projection for spontaneous facial micro-expression recognition. IEEE Trans. Affect. Comput. 2017, 10, 32–47.
  39. Liu, Y.J.; Zhang, J.K.; Yan, W.J.; Wang, S.J.; Zhao, G.; Fu, X. A main directional mean optical flow feature for spontaneous micro-expression recognition. IEEE Trans. Affect. Comput. 2015, 7, 299–310.
  40. Yan, J.; Zheng, W.; Xin, M.; Yan, J. Integrating facial expression and body gesture in videos for emotion recognition. IEICE Trans. Inf. Syst. 2014, 97, 610–613.
  41. Bao, H.; Ma, T. Feature extraction and facial expression recognition based on bezier curve. In Proceedings of the 2014 IEEE International Conference on Computer and Information Technology, Xi'an, China, 11–13 September 2014; pp. 884–887.
  42. Kavitha, D.; Hebbar, R.; Vinod, P.V.; Harsheetha, M.P.; Jyothi, L.; Madhu, S.H. CNN based technique for systematic classification of field photographs. In Proceedings of the 2018 International Conference on Design Innovations for 3Cs Compute Communicate Control (ICDI3C), Bangalore, India, 25–28 April 2018; pp. 59–63.
  43. Chua, L.O.; Yang, L. Cellular neural networks: Theory. IEEE Trans. Circuits Syst. 1988, 35, 1257–1272.
  44. Springenberg, J.T.; Dosovitskiy, A.; Brox, T.; Riedmiller, M. Striving for simplicity: The all convolutional net. In Proceedings of the 3rd International Conference on Learning Representations (ICLR 2015), Workshop Track, San Diego, CA, USA, 7–9 May 2015.
  45. Albawi, S.; Mohammed, T.A.; Al-Zawi, S. Understanding of a convolutional neural network. In Proceedings of the 2017 International Conference on Engineering and Technology (ICET), Antalya, Turkey, 21–23 August 2017; pp. 1–6.
  46. Wang, L.; Liu, W.; Shi, H.; Zurada, J.M. Cellular neural networks with transient chaos. IEEE Trans. Circuits Syst. II Express Briefs 2007, 54, 440–444.
  47. Li, X.; Ma, C.; Huang, L. Invariance principle and complete stability for cellular neural networks. IEEE Trans. Circuits Syst. II Express Briefs 2006, 53, 202–206.
Figure 1. Structure diagram of the CNN convolutional neural network.
Figure 2. The system architecture of smart long-term care.
Figure 3. Structure of the smart long-term care system.
Figure 4. The image on the left is an example of a sad feature map; the middle and right images are examples of the same emotion's features at different ages.
Figure 5. Monitoring and control interface for emotional analysis of long-term care. (A) environment monitoring block; (B) user login block; (C) block of emotion prediction results; (D) block of instant emotion analysis of facial expressions; (E) AI model scheduling control.
Figure 6. HMI block E selects various AI models.
Figure 7. The program diagram pertaining to environmental monitoring and control.
Figure 8. System login situation.
Figure 9. The emotional prediction program diagram of the Python model imported into the LabVIEW calculation.
Figure 10. Quantitative results of sentiment prediction.
Figure 11. Real-time facial expression monitoring and emotion analysis display.
Figure 12. Partial program diagram of facial expression real-time monitoring and emotion recognition analysis.
Figure 13. Flow chart of the design steps of the learning model.
Figure 14. Convolutional layer and pooling layer construction process.
Figure 15. CNN structure design diagram of this study.
Figure 16. The appearance of the designed system in practice.
Figure 17. The training model incorporates image processing numerical data and manual labels.
Figure 18. Input pre-training image captured by CAM.
Figure 19. The model design of the convolution and pooling layers.
Figure 20. The model design of the flatten layer and the fully connected layer.
Figure 22. Prediction of sad expression.
Figure 23. Prediction of happy expression.
Figure 24. Partial program of monitoring and real-time emotion recognition analysis.
Table 1. Statistical table of the testing and validation data of the research datasets.

Micro-Expression        Validation Data             Training Data
(Classification)     Elder   Others   Total     Elder   Others    Total
happy                  345     1248    1593      2315     6784     9099
anger                  244      976    1220      1042     4253     5295
sadness                287      881    1168      2151     5382     7533
fear                   193      829    1022      1097     4721     5818
disgust                 43      153     196       282      805     1087
surprise               218      716     934      1126     3686     4812
contempt               284      996    1280      1972     5819     7791
Total                 1614     5799    7413      9985    31,450   41,435
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
