Application Research Using AI, IoT, HCI, and Big Data Technologies

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Artificial Intelligence".

Deadline for manuscript submissions: closed (10 September 2023) | Viewed by 28199

Special Issue Editors


Prof. Dr. Suan Lee
Guest Editor
School of Computer Science, Semyung University, Jecheon 27136, Korea
Interests: machine learning; deep learning; big data; recommender system; multidimensional analysis

Prof. Dr. Chi-ho Lin
Guest Editor
School of Computer, Semyung University, Jecheon 27100, Korea
Interests: programmable logic devices; Boolean functions; ad hoc networks

Special Issue Information

Dear Colleagues,

Recently, AI, big data, IoT, and HCI technologies have been adopted across industry as core technologies of the Fourth Industrial Revolution, and many researchers are attempting to converge them with existing industries to achieve innovative improvements. Industrial sites generate vast amounts of data from IoT devices and sensors, and big data technology is required to process and manage these data efficiently. The models that industry needs are built and deployed through data modeling combined with machine learning or deep learning, and more convenient interaction technologies are being developed through the fusion of artificial intelligence and HCI. This Special Issue aims to publish research on convergence and application technologies based on AI, machine learning, deep learning, computer vision, big data, IoT, and HCI. Topics of interest include the following:

- AI technologies, models, algorithms;

- Deep learning, machine learning, pattern recognition;

- Deep learning-based industrial application research;

- Data science, data mining;

- Applications and platforms using AI and big data;

- Tools and systems for big data;

- Smart computing and edge computing;

- IoT, AIoT, and the processing and application of various sensor data;

- Technologies and applications combining HCI and AI.

Prof. Dr. Suan Lee
Prof. Dr. Chi-ho Lin
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and a short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • deep learning
  • IoT
  • HCI
  • big data

Published Papers (15 papers)


Research

18 pages, 1146 KiB  
Article
BioEdge: Accelerating Object Detection in Bioimages with Edge-Based Distributed Inference
by Hyunho Ahn, Munkyu Lee, Sihoon Seong, Minhyeok Lee, Gap-Joo Na, In-Geol Chun, Youngpil Kim and Cheol-Ho Hong
Electronics 2023, 12(21), 4544; https://doi.org/10.3390/electronics12214544 - 05 Nov 2023
Cited by 1 | Viewed by 1081
Abstract
Convolutional neural networks (CNNs) have enabled effective object detection tasks in bioimages. Unfortunately, implementing such an object detection model can be computationally intensive, especially on resource-limited hardware in a laboratory or hospital setting. This study aims to develop a framework called BioEdge that can accelerate object detection using Scaled-YOLOv4 and YOLOv7 by leveraging edge computing for bioimage analysis. BioEdge employs a distributed inference technique with Scaled-YOLOv4 and YOLOv7 to harness the computational resources of both a local computer and an edge server, enabling rapid detection of COVID-19 abnormalities in chest radiographs. By implementing distributed inference techniques, BioEdge addresses privacy concerns that can arise when transmitting biomedical data to an edge server. Additionally, it incorporates a computationally lightweight autoencoder at the split point to reduce data transmission overhead. For evaluation, this study utilizes the COVID-19 dataset provided by the Society for Imaging Informatics in Medicine (SIIM). BioEdge is shown to improve the inference latency of Scaled-YOLOv4 and YOLOv7 by up to 6.28 times with negligible accuracy loss compared to local computer execution in our evaluation setting.
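As a rough illustration of the split-point compression described above (this is not the authors' BioEdge code; the module names, the 1×1-convolution bottleneck, and the client/server split are assumptions), the PyTorch sketch below shows how an intermediate detector feature map could be shrunk on the local machine and restored on an edge server:

```python
import torch
import torch.nn as nn

class SplitEncoder(nn.Module):
    """Lightweight bottleneck placed at the split point (client side)."""
    def __init__(self, channels: int, bottleneck: int):
        super().__init__()
        self.compress = nn.Conv2d(channels, bottleneck, kernel_size=1)

    def forward(self, x):
        return self.compress(x)

class SplitDecoder(nn.Module):
    """Counterpart that restores the channel width on the edge server."""
    def __init__(self, channels: int, bottleneck: int):
        super().__init__()
        self.expand = nn.Conv2d(bottleneck, channels, kernel_size=1)

    def forward(self, z):
        return self.expand(z)

def client_side(backbone_head: nn.Module, encoder: SplitEncoder, image: torch.Tensor) -> torch.Tensor:
    """Run the early detector layers locally and compress the intermediate tensor."""
    with torch.no_grad():
        features = backbone_head(image)   # early YOLO-style backbone layers
        return encoder(features)          # small tensor sent over the network

def server_side(decoder: SplitDecoder, backbone_tail: nn.Module, payload: torch.Tensor) -> torch.Tensor:
    """Reconstruct the features on the edge server and finish detection."""
    with torch.no_grad():
        features = decoder(payload)
        return backbone_tail(features)    # remaining layers and detection heads
```

In a setup like this, the encoder/decoder pair would be trained jointly with the detector so that the compression adds negligible accuracy loss, which is the trade-off the abstract reports.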

14 pages, 691 KiB  
Article
Efficient ϵ-Approximate k-Flexible Aggregate Nearest Neighbor Search for Arbitrary ϵ in Road Networks
by Hyuk-Yoon Kwon, Jaejun Yoo and Woong-Kee Loh
Electronics 2023, 12(17), 3622; https://doi.org/10.3390/electronics12173622 - 27 Aug 2023
Viewed by 543
Abstract
Recently, as spatial-information-based applications such as location-based services (LBS) have become very diverse and frequent, complicated spatial search algorithms have emerged. The aggregate nearest neighbor (ANN) search is an extension of the existing nearest neighbor (NN) search; it finds the object p* that minimizes G{d(p*, qi) | qi ∈ Q} from a set Q of M (≥ 1) query objects, where G is an aggregate function and d() is the distance between two objects. The flexible aggregate nearest neighbor (FANN) search extends the ANN search by introducing a flexibility factor ϕ (0 < ϕ ≤ 1); it finds the object p* that minimizes G{d(p*, qi) | qi ∈ Qϕ} from Qϕ, a subset of Q with |Qϕ| = ϕ|Q|. This paper proposes an efficient ϵ-approximate k-FANN (k ≥ 1) search algorithm for an arbitrary approximation ratio ϵ (≥ 1) in road networks. In general, ϵ-approximate algorithms are expected to give an improved search performance at the cost of allowing an error ratio of up to the given ϵ. Since the optimal value of ϵ varies greatly depending on applications and cases, an approximate algorithm for an arbitrary ϵ is essential. We prove that the error ratios of the approximate FANN objects returned by our algorithm do not exceed the given ϵ. To the best of our knowledge, our algorithm is the first ϵ-approximate k-FANN search algorithm in road networks for an arbitrary ϵ. Through a series of experiments using various real-world road network datasets, we demonstrated that our approximate algorithm always outperformed the previous state-of-the-art exact algorithm and that the error ratios of the approximate FANN objects were significantly lower than the given ϵ value.
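To make the FANN objective concrete, here is a small, naive exact search in Python under simplifying assumptions (Euclidean distance instead of road-network distance, |Qϕ| rounded up); it only illustrates the definition of G and ϕ, not the paper's ϵ-approximate road-network algorithm:

```python
from math import ceil, dist
from typing import Callable, Sequence, Tuple

def fann_brute_force(
    candidates: Sequence[Tuple[float, float]],
    queries: Sequence[Tuple[float, float]],
    phi: float,
    aggregate: Callable[[Sequence[float]], float] = max,
) -> Tuple[Tuple[float, float], float]:
    """Naive exact FANN: for each candidate p, aggregate its distances to the best
    phi-fraction subset of the query set Q and return the overall minimizer."""
    m = ceil(phi * len(queries))        # |Q_phi| = phi * |Q|, rounded up here
    best_p, best_cost = None, float("inf")
    for p in candidates:
        # for a fixed p, the optimal subset Q_phi is simply its m closest queries
        nearest = sorted(dist(p, q) for q in queries)[:m]
        cost = aggregate(nearest)       # G, e.g. max or sum
        if cost < best_cost:
            best_p, best_cost = p, cost
    return best_p, best_cost

# Example: three candidate meeting points, five query users, G = max, phi = 0.6
points = [(0.0, 0.0), (5.0, 5.0), (2.0, 1.0)]
users = [(1.0, 1.0), (2.0, 0.0), (3.0, 2.0), (9.0, 9.0), (0.0, 2.0)]
print(fann_brute_force(points, users, phi=0.6))
```

The paper's contribution is avoiding this exhaustive scan on road networks while guaranteeing that the returned objects stay within the chosen error ratio ϵ.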

26 pages, 1389 KiB  
Article
MAGNETO and DeepInsight: Extended Image Translation with Semantic Relationships for Classifying Attack Data with Machine Learning Models
by Aeryn Dunmore, Adam Dunning, Julian Jang-Jaccard, Fariza Sabrina and Jin Kwak
Electronics 2023, 12(16), 3463; https://doi.org/10.3390/electronics12163463 - 15 Aug 2023
Cited by 2 | Viewed by 1143
Abstract
The translation of traffic flow data into images for the purposes of classification in machine learning tasks has been extensively explored in recent years. However, the method of translation has a significant impact on the success of such attempts. In 2019, a method called DeepInsight was developed to translate genetic information into images. It was then adopted in 2021 for the purpose of translating network traffic into images, allowing the retention of semantic data about the relationships between features, in a model called MAGNETO. In this paper, we explore and extend this research, using the MAGNETO algorithm on three new intrusion detection datasets—CICDDoS2019, 5G-NIDD, and BOT-IoT—and also extend this method into the realm of multiclass classification tasks using first a One versus Rest model, followed by a full multiclass classification task, using multiple new classifiers for comparison against the CNNs implemented by the original MAGNETO model. We have also undertaken comparative experiments on the original MAGNETO datasets, CICIDS17, KDD99, and UNSW-NB15, as well as a comparison for other state-of-the-art models using the NSL-KDD dataset. The results show that the MAGNETO algorithm and the DeepInsight translation method, without the use of data augmentation, offer a significant boost to accuracy when classifying network traffic data. Our research also shows the effectiveness of Decision Tree and Random Forest classifiers on this type of data. Further research into the potential for real-time execution is needed to explore the possibilities for extending this method of translation into real-world scenarios.
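The sketch below illustrates the general DeepInsight-style idea that MAGNETO builds on, i.e., turning tabular traffic features into images while preserving feature relationships; it is not the MAGNETO implementation (plain PCA replaces the original t-SNE/kernel PCA embedding, and the grid size, collision rule, and scaling are assumptions):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import MinMaxScaler

def features_to_images(X: np.ndarray, side: int = 32) -> np.ndarray:
    """DeepInsight-style translation: place each *feature* (column) at a 2D pixel
    location derived from the feature similarity structure, then paint every
    sample's scaled feature values onto that fixed pixel layout."""
    # 1) Embed the feature space (columns) in 2D.
    coords = PCA(n_components=2).fit_transform(X.T)
    # 2) Quantize the 2D coordinates to a side x side pixel grid.
    grid = MinMaxScaler((0, side - 1)).fit_transform(coords).round().astype(int)
    # 3) Scale sample values to [0, 1] and write them into the images.
    values = MinMaxScaler().fit_transform(X)
    images = np.zeros((X.shape[0], side, side), dtype=np.float32)
    for j, (row, col) in enumerate(grid):
        # if two features collide on the same pixel, keep the brighter value
        images[:, row, col] = np.maximum(images[:, row, col], values[:, j])
    return images

# e.g. images = features_to_images(flow_features); the images then feed a CNN classifier
```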

17 pages, 7277 KiB  
Article
Ensemble Prediction Model for Dust Collection Efficiency of Wet Electrostatic Precipitator
by Sugi Choi, Sunghwan Kim and Haiyoung Jung
Electronics 2023, 12(12), 2579; https://doi.org/10.3390/electronics12122579 - 07 Jun 2023
Cited by 2 | Viewed by 1293
Abstract
Wet electrostatic precipitators (WESPs) are mainly installed in industries and factories where particulate matter (PM) is primarily generated. Such wet-type precipitators exhibit excellent performance, showing a PM collection efficiency of 97 to 99%, but the collection efficiency may decrease rapidly when the dust collector and the discharge electrode are corroded by water. Thus, developing technology to predict efficient PM collection in the design and operation of WESPs is critical. Previous studies have mainly developed machine learning-based models to predict atmospheric PM concentrations using data measured by meteorological agencies. However, the analysis of models for predicting the dust collection efficiency of WESPs installed in factories and industrial facilities is insufficient. In this study, a WESP was installed, and PM collection experiments were conducted. Nonlinear data such as operating conditions and PM measurements were collected, and ensemble PM collection efficiency prediction models were developed. According to the research results, the random forest model yielded excellent performance, with the best results achieved when the target was PM 7: R2, MAE, and MSE scores of 0.956, 0.747, and 1.748, respectively.
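A minimal sketch of the kind of ensemble regression and evaluation reported above, assuming a hypothetical data layout (`wesp_experiments.csv` and the column names are invented for illustration); the study's actual features, targets, and tuning are described in the paper:

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

# Hypothetical layout: operating conditions as inputs, PM 7 collection efficiency as target.
df = pd.read_csv("wesp_experiments.csv")
X = df.drop(columns=["pm7_collection_efficiency"])
y = df["pm7_collection_efficiency"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = RandomForestRegressor(n_estimators=300, random_state=42)
model.fit(X_train, y_train)
pred = model.predict(X_test)

print("R2 :", r2_score(y_test, pred))
print("MAE:", mean_absolute_error(y_test, pred))
print("MSE:", mean_squared_error(y_test, pred))
```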

21 pages, 4296 KiB  
Article
k-NN Query Optimization for High-Dimensional Index Using Machine Learning
by Dojin Choi, Jiwon Wee, Sangho Song, Hyeonbyeong Lee, Jongtae Lim, Kyoungsoo Bok and Jaesoo Yoo
Electronics 2023, 12(11), 2375; https://doi.org/10.3390/electronics12112375 - 24 May 2023
Viewed by 1218
Abstract
In this study, we propose three k-nearest neighbor (k-NN) optimization techniques for a distributed, in-memory-based, high-dimensional indexing method to speed up content-based image retrieval. The proposed techniques perform distributed, in-memory, high-dimensional indexing-based k-NN query optimization: a density-based optimization technique that performs k-NN optimization using data distribution; a cost-based optimization technique using query processing cost statistics; and a learning-based optimization technique using a deep learning model, based on query logs. The proposed techniques were implemented on Spark, which supports a master/slave model for large-scale distributed processing. We showed the superiority and validity of the proposed techniques through various performance evaluations, based on high-dimensional data.

14 pages, 1644 KiB  
Article
Deep Learning-Based Context-Aware Recommender System Considering Change in Preference
by Soo-Yeon Jeong and Young-Kuk Kim
Electronics 2023, 12(10), 2337; https://doi.org/10.3390/electronics12102337 - 22 May 2023
Cited by 2 | Viewed by 1482
Abstract
In order to predict and recommend what users want, users’ information is required, and more information is required to improve the performance of the recommender system. As IoT devices and smartphones have made it possible to know the user’s context, context-aware recommender systems have emerged to predict preferences by considering the user’s context. A context-aware recommender system uses contextual information such as time, weather, and location to predict preferences. However, a user’s preferences are not always the same in a given context. They may follow trends or make different choices due to changes in their personal environment. Therefore, in this paper, we propose a context-aware recommender system that considers the change in users’ preferences over time. The proposed method is a context-aware recommender system that uses Matrix Factorization with a preference transition matrix to capture and reflect the changes in users’ preferences. To evaluate the performance of the proposed method, we compared the performance with the traditional recommender system, context-aware recommender system, and dynamic recommender system, and confirmed that the performance of the proposed method is better than the existing methods.
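As one possible reading of "Matrix Factorization with a preference transition matrix" (the paper's actual model, loss, and context handling are not given here, so every step below is an assumption), this numpy sketch factorizes two time slices of a toy rating matrix against shared item factors and fits a linear transition between the per-period user factors:

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 20, 15, 6

# Toy rating matrices for two consecutive time periods.
R_prev = rng.integers(0, 6, size=(n_users, n_items)).astype(float)
R_curr = rng.integers(0, 6, size=(n_users, n_items)).astype(float)

# 1) Shared item factors Q from a truncated SVD of the pooled ratings.
U, s, Vt = np.linalg.svd(R_prev + R_curr, full_matrices=False)
Q = Vt[:k].T * np.sqrt(s[:k])                            # (n_items x k)

# 2) Per-period user factors solved by least squares against the shared Q.
P_prev = np.linalg.lstsq(Q, R_prev.T, rcond=None)[0].T   # (n_users x k)
P_curr = np.linalg.lstsq(Q, R_curr.T, rcond=None)[0].T

# 3) "Preference transition" matrix T capturing how user factors drift over time.
T = np.linalg.lstsq(P_prev, P_curr, rcond=None)[0]       # (k x k)

# 4) Extrapolate the next period's preferences and score all items.
P_next = P_curr @ T
predicted_scores = P_next @ Q.T                          # (n_users x n_items)
print(predicted_scores.shape)
```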

19 pages, 8943 KiB  
Article
Dynamic Characteristics Prediction Model for Diesel Engine Valve Train Design Parameters Based on Deep Learning
by Wookey Lee, Tae-Yun Jung and Suan Lee
Electronics 2023, 12(8), 1806; https://doi.org/10.3390/electronics12081806 - 11 Apr 2023
Cited by 2 | Viewed by 1944
Abstract
This paper presents a comprehensive study on the utilization of machine learning and deep learning techniques to predict the dynamic characteristics of design parameters, exemplified by a diesel engine valve train. The research aims to address the challenging and time-consuming analysis required to optimize the performance and durability of valve train components, which are influenced by numerous factors. To this end, dynamic analysis data have been collected for diesel engine specifications and used to construct a regression prediction model using a gradient boosting regressor tree (GBRT), a deep neural network (DNN), a one-dimensional convolution neural network (1D-CNN), and long short-term memory (LSTM). The prediction model was utilized to estimate the force and valve seating velocity values of the valve train system. The dynamic characteristics of the case were evaluated by comparing the actual and predicted values. The results showed that the GBRT model had an R2 value of 0.90 for the valve train force and 0.97 for the valve seating velocity, while the 1D-CNN model had an R2 value of 0.89 for the valve train force and 0.98 for the valve seating velocity. The results of this study have important implications for advancing the design and development of efficient and reliable diesel engines.

15 pages, 12322 KiB  
Communication
Automated Guided Vehicle (AGV) Driving System Using Vision Sensor and Color Code
by Jun-Yeong Jang, Su-Jeong Yoon and Chi-Ho Lin
Electronics 2023, 12(6), 1415; https://doi.org/10.3390/electronics12061415 - 16 Mar 2023
Cited by 2 | Viewed by 3485
Abstract
Recently, the use of Automated Guided Vehicles (AGVs) at production sites has been increasing due to industrial development, such as the introduction of smart factories. AGVs utilizing the wired induction method, which is cheaper and faster than the wireless induction method, are mainly used at production sites. However, the wired guidance AGV operation system has the disadvantage of being limited to small-batch or collaboration-based production sites, since it is difficult to change the driving route. In this paper, we propose an AGV line-scan algorithm that can perform route recognition, driving commands, and operation through color-code recognition using an Arduino controller and a low-cost vision sensor, instead of the optical sensor conventionally used for these functions. When the proposed algorithm is applied to the AGV, the CMUcam 5 Pixy2 camera identifies the driving path to follow by tracking a black line using the Otsu method. In addition, we confirmed that driving commands are executed using the proposed color codes by applying the color recognition function of the CMUcam 5 Pixy2.
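For illustration only, the Python/OpenCV sketch below shows the Otsu-thresholding line-following step in generic form; the actual system runs on an Arduino with a CMUcam 5 Pixy2, so the camera source, scan band, and offset convention here are assumptions:

```python
import cv2
import numpy as np

def line_offset_from_center(frame_bgr: np.ndarray) -> float:
    """Binarize the frame with Otsu's method, locate the dark guide line in a band
    near the vehicle, and return its horizontal offset from the image center
    (negative = steer left, positive = steer right)."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # THRESH_BINARY_INV makes the black tape the white foreground in the mask.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    band = mask[-40:, :]                  # bottom rows closest to the vehicle
    xs = np.where(band > 0)[1]
    if xs.size == 0:
        raise ValueError("no guide line found in the scan band")
    return float(xs.mean() - frame_bgr.shape[1] / 2.0)

# Usage (assumed video source):
#   cap = cv2.VideoCapture(0); ok, frame = cap.read()
#   offset = line_offset_from_center(frame)  # feed into the steering controller
```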

19 pages, 6415 KiB  
Article
Rotor Fault Diagnosis Method Using CNN-Based Transfer Learning with 2D Sound Spectrogram Analysis
by Haiyoung Jung, Sugi Choi and Bohee Lee
Electronics 2023, 12(3), 480; https://doi.org/10.3390/electronics12030480 - 17 Jan 2023
Cited by 7 | Viewed by 1633
Abstract
This study discusses a failure detection algorithm that uses frequency analysis and artificial intelligence to determine whether a rotor used in an industrial setting has failed. A rotor is a standard component widely used in industrial sites, and continuous friction and corrosion frequently result in motor and bearing failures. As workers who inspect failures directly are at risk of serious accidents, an automated environment that can operate unmanned and a system for accurate failure determination are required. This study proposes an algorithm to detect faults by introducing convolutional neural networks (CNNs) after converting the fault sound from the rotor into a spectrogram through short-time Fourier transform (STFT) analysis and visually processing it. A binary classifier for distinguishing between normal and failure states was added to the output part of the neural network structure used, which was based on the transfer learning methodology. We mounted the proposed structure on a designed embedded system to conduct performance discrimination experiments and analyze various outcome indicators using real-world fault data from various situations. The analysis revealed that failure could be detected in response to various normal and fault sounds of the field system and that both training and validation accuracy were greater than 99%. We further intend to investigate artificial intelligence algorithms that train and learn by classifying fault types into early, middle, and late stages to identify more specific faults.
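The sketch below shows a generic version of this pipeline in PyTorch: an STFT log-spectrogram fed to an ImageNet-pretrained ResNet-18 with a new binary head. It is not the authors' network; the transform parameters, the frozen backbone, the file name, and the torchvision >= 0.13 `weights="DEFAULT"` call are assumptions:

```python
import torch
import torch.nn as nn
import torchaudio
import torchvision

# 1) Sound -> log-magnitude STFT spectrogram, repeated to 3 channels for an ImageNet CNN.
spectrogram = torchaudio.transforms.Spectrogram(n_fft=1024, hop_length=256)
to_db = torchaudio.transforms.AmplitudeToDB()

def sound_to_image(waveform: torch.Tensor) -> torch.Tensor:
    spec = to_db(spectrogram(waveform))              # (1, freq, time)
    spec = (spec - spec.mean()) / (spec.std() + 1e-6)
    return spec.repeat(3, 1, 1)

# 2) Transfer learning: pretrained backbone frozen, only the new binary head is trained.
model = torchvision.models.resnet18(weights="DEFAULT")   # torchvision >= 0.13
for p in model.parameters():
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)            # normal vs. fault
model.eval()

waveform, sr = torchaudio.load("rotor_sample.wav")       # assumed file name
logits = model(sound_to_image(waveform[:1]).unsqueeze(0))
print(logits.softmax(dim=1))
```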

19 pages, 13462 KiB  
Article
Robust and Lightweight Deep Learning Model for Industrial Fault Diagnosis in Low-Quality and Noisy Data
by Jaegwang Shin and Suan Lee
Electronics 2023, 12(2), 409; https://doi.org/10.3390/electronics12020409 - 13 Jan 2023
Cited by 5 | Viewed by 1889
Abstract
Machines in factories are typically operated 24 h a day to support production, which may result in malfunctions. Such mechanical malfunctions may disrupt factory output, resulting in financial losses or human casualties. Therefore, we investigate a deep learning model that can detect abnormalities in machines based on the operating noise. Various data preprocessing methods, including the discrete wavelet transform, the Hilbert transform, and the short-time Fourier transform, were applied to extract characteristics from machine-operating noises. To create a model that can be used in factories, the environment of real factories was simulated by introducing noise and quality degradation to the sound dataset for Malfunctioning Industrial Machine Investigation and Inspection (MIMII). Thus, we propose a lightweight model that runs reliably even in noisy and low-quality sound data environments, such as a real factory. The proposed Convolutional Neural Network–Long Short-Term Memory (CNN–LSTM) model operates on short-time Fourier transforms (STFTs) and is highly practical: it requires only about 6.6% of the parameters of the underlying CNN, while its performance differs by no more than 0.5%.
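Below is a minimal PyTorch sketch of the CNN–LSTM architecture family named in the abstract (a small CNN over the frequency axis of STFT frames, an LSTM over time, and a binary head); the layer sizes, pooling scheme, and input shape are assumptions, not the authors' lightweight model:

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    """Small CNN front-end over STFT frames, LSTM over time, binary classification head."""
    def __init__(self, n_bins: int = 128, hidden: int = 64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d((2, 1)),                     # pool over frequency only
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d((2, 1)),
        )
        self.lstm = nn.LSTM(input_size=32 * (n_bins // 4), hidden_size=hidden,
                            batch_first=True)
        self.head = nn.Linear(hidden, 2)              # normal vs. malfunction

    def forward(self, spec: torch.Tensor) -> torch.Tensor:
        # spec: (batch, 1, freq, time)
        z = self.cnn(spec)                            # (batch, 32, freq // 4, time)
        z = z.permute(0, 3, 1, 2).flatten(2)          # (batch, time, 32 * freq // 4)
        out, _ = self.lstm(z)
        return self.head(out[:, -1])                  # classify from the last time step

model = CNNLSTM()
dummy = torch.randn(4, 1, 128, 200)                   # 4 spectrograms, 128 bins, 200 frames
print(model(dummy).shape)                             # torch.Size([4, 2])
```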

0 pages, 9870 KiB  
Article
Detection Model of Hangul Stroke Elements: Expansion of Non-Structured Font and Influence Evaluation by Stroke Element Combinations
by Soon-Bum Lim, Jongwoo Lee, Xiaotong Zhao and Yoojeong Song
Electronics 2023, 12(2), 383; https://doi.org/10.3390/electronics12020383 - 12 Jan 2023
Cited by 5 | Viewed by 1626
Abstract
With the increase in various media, new fonts continue to be developed. In Korea, numerous ‘Hangul’ fonts are also being developed, and as a result, the need for research on determining the similarity between fonts is emerging. For example, when creating a document, the font to be used must be downloaded in each computing environment, which is a very cumbersome process. If a font is not supported in the system, the problem can be easily solved by recommending the most similar font that can replace it. In line with this need, we conducted various prior studies on similar-font recommendation. As a result, we defined the ‘stroke elements’ that exist in each consonant and vowel of a Korean font and developed a font recommendation model using them. However, the existing research has a limitation in that it covered only the structured fonts corresponding to printed type. Additionally, the font size was not considered in the font recommendation. In this study, two experiments were conducted to expand the font recommendation model by addressing the limitations of existing studies. First, in order to enable similar-font recommendations based on stroke elements even for fonts with various shapes, the fonts were classified according to shape, and the stroke elements in each class were detected. Second, when the font sizes differed, the change in the stroke-element-based font recommendation results was analyzed. In conclusion, we found that a method for extracting stroke elements is needed to recommend fonts that do not belong to the standard structured fonts. In addition, since the influence of a stroke element varies depending on the size of the font, we propose a stroke element weight model that reflects this influence in the recommendation.

18 pages, 8205 KiB  
Article
Text Network-Based Method for Measuring Hand Functions in Degenerative Brain Disease Patients
by Cholzi Kang, Jaehoon Kim, Hosang Moon and Sungtaek Chung
Electronics 2023, 12(2), 340; https://doi.org/10.3390/electronics12020340 - 09 Jan 2023
Viewed by 1008
Abstract
In this study, we collected various past study results on tools and analytical methods for measuring the hand functions of patients with degenerative brain diseases, such as Parkinson’s disease and stroke, and selected and proposed appropriate hand function measurement tools, methods, and analysis software based on text network analysis. We searched the literature using keywords related to degenerative brain disease and stroke patients for the participant types, devices and sensors for the intervention types, and hand function assessment for the measurement types. Among the 2484 publications collected, 19 were eventually selected based on certain inclusion and exclusion criteria. According to the text network analysis, degree centrality and betweenness centrality were highest for the keywords Parkinson’s disease for the participant type, force sensor for the intervention type, and finger tapping for the measurement type. Based on these results, pinch gloves comprising an FSR sensor were manufactured, and software and contents were implemented to measure and analyze various quantitative parameter values during finger tapping. The software can evaluate endurance and agility by measuring the finger-tapping intensity and operation time using the index finger and thumb. The contents can evaluate the stability of hand functions by analyzing the coefficient of variation of the tapping interval and the average contact time, and the accuracy of hand functions by analyzing the reaction rate to the presented visual stimulus. When comparing hand functions through 10 types of analysis parameters with a sample of 12 ordinary subjects (8 men and 4 women) using the manufactured pinch gloves, there was a difference between the two genders in the items evaluating muscle strength and agility, and a significant difference in the analysis parameters evaluating stability and accuracy. The results indicate that the text-network-analysis-based hand function measurement tool and method proposed in this study should help derive objective research results and enable quantitative comparison of the results of various researchers.
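As a toy example of the text network analysis step (a keyword co-occurrence network scored with degree and betweenness centrality), the Python/networkx sketch below uses made-up keyword sets; the actual corpus, keyword extraction, and screening criteria come from the paper:

```python
import itertools
import networkx as nx

# Hypothetical keyword sets extracted from selected publications.
papers = [
    {"parkinson's disease", "finger tapping", "force sensor"},
    {"stroke", "finger tapping", "glove"},
    {"parkinson's disease", "force sensor", "glove"},
]

# Build a keyword co-occurrence network: keywords are nodes, shared papers add edge weight.
G = nx.Graph()
for keywords in papers:
    for a, b in itertools.combinations(sorted(keywords), 2):
        weight = G.get_edge_data(a, b, {}).get("weight", 0)
        G.add_edge(a, b, weight=weight + 1)

degree = nx.degree_centrality(G)
betweenness = nx.betweenness_centrality(G)

for node in sorted(G, key=degree.get, reverse=True):
    print(f"{node:22s} degree={degree[node]:.2f} betweenness={betweenness[node]:.2f}")
```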

17 pages, 2723 KiB  
Article
Recommendation of Music Based on DASS-21 (Depression, Anxiety, Stress Scales) Using Fuzzy Clustering
by Eunyoung Wang, Hyeokmin Lee, Kyunghee Do, Moonhwan Lee and Sungtaek Chung
Electronics 2023, 12(1), 168; https://doi.org/10.3390/electronics12010168 - 30 Dec 2022
Cited by 5 | Viewed by 2834
Abstract
The present study proposes a music recommendation service in a mobile environment that uses the DASS-21 questionnaire to distinguish and measure certain psychological instability symptoms, namely anxiety, depression, and stress, which anyone can experience regardless of job or age. In general, the DASS-21 outcome of almost every participant did not reveal any single psychological state out of the abovementioned three states. Therefore, weighted scores were calculated for each scale, and fuzzy clustering was used to cluster users into groups with similar states. For the generation of the initial dataset, we used the DASS inventory collected by the Open-Source Psychometrics Project from 2017 to 2019 on approximately 39,000 respondents; the survey results showed that the average scores for each scale were 23.6 points for depression, 17.4 for anxiety, and 23.3 for stress. Based on the fuzzy clustering results, the individuals were classified into three groups: Group 1 was recommended music for “high” depression, “high” anxiety, and “low” stress; Group 2 was recommended music for “normal” depression, “low” anxiety, and “normal” stress; and Group 3 was recommended music for “high” depression, “high” anxiety, and “high” stress. In particular, the largest numbers of recommended music tracks were for “high” depression in Group 1 (4.64), “low” anxiety in Group 2 (4.54), and “high” anxiety in Group 3 (4.76). In addition, to compare the fuzzy clustering results with other data, the silhouette coefficients of the samples extracted with the same severity ratio and of those generated by simple random sampling were 0.641 and 0.586, respectively, both greater than 0. The proposed service can recommend not only the music of users with similar trends across all psychological states, but also the music of users with partially similar psychological states.
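To make the clustering step tangible, here is a minimal fuzzy c-means written directly in numpy and applied to toy (depression, anxiety, stress) score vectors; the number of clusters, the fuzzifier m = 2, and the toy data are assumptions, and the study's weighting of the DASS-21 scales is not reproduced:

```python
import numpy as np

def fuzzy_cmeans(X: np.ndarray, c: int = 3, m: float = 2.0,
                 iters: int = 100, seed: int = 0):
    """Minimal fuzzy c-means: returns cluster centers and the membership matrix U,
    where U[i, j] is the degree to which sample i belongs to cluster j."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-9
        U = 1.0 / dist ** (2.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

# Toy weighted scores: columns = (depression, anxiety, stress).
scores = np.array([[28, 20, 10], [30, 22, 12], [12, 6, 14],
                   [10, 5, 15], [27, 19, 30], [29, 21, 28]], dtype=float)
centers, U = fuzzy_cmeans(scores, c=3)
print("centers:\n", centers.round(1))
print("memberships:\n", U.round(2))    # soft assignments; each row sums to 1
```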

23 pages, 6104 KiB  
Article
Development of an Electrooculogram (EOG) and Surface Electromyogram (sEMG)-Based Human Computer Interface (HCI) Using a Bone Conduction Headphone Integrated Bio-Signal Acquisition System
by Ha Na Jo, Sung Woo Park, Han Gyeol Choi, Seok Hyun Han and Tae Seon Kim
Electronics 2022, 11(16), 2561; https://doi.org/10.3390/electronics11162561 - 16 Aug 2022
Cited by 1 | Viewed by 1975
Abstract
Human–computer interface (HCI) methods based on the electrooculogram (EOG) signals generated from eye movement have been continuously studied because they can transmit commands to a computer or machine without using both arms. However, usability and appearance are major obstacles to practical application, since conventional EOG-based HCI methods require skin electrodes outside the eye near the lateral and medial canthus. To solve these problems, in this paper, we report the development of an HCI method that can simultaneously acquire EOG and surface electromyogram (sEMG) signals through electrodes integrated into bone conduction headphones and transmit commands through horizontal eye movements and various biting movements. The developed system can classify the position of the eyes by dividing the 80-degree range (from −40 degrees to the left to +40 degrees to the right) into 20-degree sections and can also recognize three biting movements based on the bio-signals obtained from the three electrodes, so a total of 11 commands can be delivered to a computer or machine. The experimental results showed that the interface has an accuracy of 92.04% and 96.10% for EOG signal-based commands and sEMG signal-based commands, respectively. As for the results of the virtual keyboard interface application, the accuracy was 97.19%, the precision was 90.51%, and the typing speed was 5.75–18.97 letters/min. The proposed interface system can be applied to various HCI and HMI fields as well as virtual keyboard applications.

24 pages, 3857 KiB  
Article
Implementation of Voice-Based Report Generator Application for Visually Impaired
by Jungyoon Choi, Yoojeong Song and Jongwoo Lee
Electronics 2022, 11(12), 1847; https://doi.org/10.3390/electronics11121847 - 10 Jun 2022
Cited by 1 | Viewed by 2153
Abstract
The college entrance rate of people with disabilities is gradually increasing, and each university is trying to provide equal rights and opportunities for college students with disabilities. However, students with disabilities still have difficulty adapting to college life due to limitations in the range of experience and diversity, restrictions in walking ability, and restrictions on interaction with the environment. Visually impaired students cannot perform tasks given by universities independently without the help of others, yet universities offer no supporting system other than helpers. Therefore, in this paper, we aimed to develop independent report generation software, VTR4VI (Voice to Report program for the Visually Impaired), for students with visual impairment using the mobile devices they always carry. Since existing speech recognition document editing software is designed for non-visually impaired people, it is difficult for the visually impaired to use. Accordingly, the requirements of a report generator for blind students were identified so that blind students could freely perform assignments or write reports without helpers, just like non-visually impaired students. The software can be used simply by clicking a Bluetooth remote control instead of touching the phone screen, and its operation is simple. Our usability evaluation indicates that VTR4VI can help the visually impaired study and produce written reports.
