Feature Papers in Computer Science & Engineering

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: closed (15 December 2023)

Special Issue Editors


Prof. Dr. Juan M. Corchado
Guest Editor
1. BISITE Research Group, University of Salamanca, 37007 Salamanca, Spain
2. Air Institute, IoT Digital Innovation Hub, 37188 Salamanca, Spain
3. Department of Electronics, Information and Communication, Faculty of Engineering, Osaka Institute of Technology, Osaka 535-8585, Japan
Interests: artificial intelligence; smart cities; smart grids

Prof. Dr. In Lee
Guest Editor
School of Computer Sciences, Western Illinois University, Macomb, IL 61455, USA
Interests: service robots; IoT; social media; big data; metaverse

Prof. Dr. Fuji Ren
Guest Editor
Faculty of Engineering, Tokushima University, Tokushima 770-8501, Japan
Interests: language understanding and communication; affective computing; computer science; intelligent robots; social computing

Special Issue Information

Dear Colleagues, 

We are pleased to announce that the Section Computer Science and Engineering is compiling a collection of papers submitted by our Section’s Editorial Board members and leading scholars in this field of research. We welcome contributions as well as recommendations from Editorial Board members.

The aim of this Special Issue is to publish a set of the best original articles, including in-depth reviews of the state of the art and up-to-date contributions involving the use of intelligent models and/or the IoT in sectors of interest. Anything that brings innovative elements and is related to Deeptech is welcome. We hope that these articles will be widely read and have a great influence on the field. All articles in this Special Issue will be compiled in a print edition book after the deadline and will be appropriately promoted.

Topics of interest are all those involving advanced intelligent models and their applications in areas such as:

  • IoT and its applications
  • Industry 4.0
  • Smart cities
  • Biotechnology
  • Precision agriculture
  • Fintech
  • Quantum economy
  • Blockchain
  • Cybersecurity
  • Big data analytics and artificial intelligence

Prof. Dr. Juan M. Corchado
Prof. Dr. Byung-Gyu Kim
Dr. Carlos A. Iglesias
Prof. Dr. In Lee
Prof. Dr. Fuji Ren
Prof. Dr. Rashid Mehmood
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (74 papers)


Research


18 pages, 675 KiB  
Article
A Metric Learning Perspective on the Implicit Feedback-Based Recommendation Data Imbalance Problem
by Weiming Huang, Baisong Liu and Zhaoliang Wang
Electronics 2024, 13(2), 419; https://doi.org/10.3390/electronics13020419 - 19 Jan 2024
Abstract
Paper recommendation systems are important for alleviating academic information overload. Such systems provide personalized recommendations based on implicit feedback from users, supplemented by their subject information, citation networks, etc. However, such recommender systems face problems like data sparsity for positive samples and uncertainty for negative samples. In this paper, we address these two issues and improve upon them from the perspective of metric learning. The algorithm is modeled as a push–pull loss function. For the positive-sample pull operation, we introduce a context factor, which accelerates the convergence of the objective function through the multiplication rule to alleviate the data sparsity problem. For the negative-sample push operation, we adopt an unbiased global negative sampling method and use an intermediate matrix caching method to greatly reduce the computational complexity. Experimental results on two real datasets show that our method outperforms other baseline methods in terms of recommendation accuracy and computational efficiency. Moreover, our metric learning method that introduces context improves by more than 5% over the element-wise alternating least squares method. We demonstrate the potential of metric learning in addressing the problem of implicit feedback recommender systems with positive and negative sample imbalances.
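
To make the push–pull formulation concrete, here is a minimal PyTorch sketch (ours, not the authors' code): the positive item is pulled toward the user, scaled by a hypothetical context factor, while sampled negatives are pushed beyond a margin. The margin, factor, and sizes are illustrative assumptions.

```python
# Illustrative push-pull metric learning loss (a sketch, not the paper's exact loss).
import torch

def push_pull_loss(user, pos_item, neg_items, ctx, margin=1.0):
    # Pull: draw the positive item toward the user; the (hypothetical)
    # context factor scales the pull term to speed up convergence.
    pull = ctx * (user - pos_item).pow(2).sum(-1)
    # Push: move sampled negatives until they are at least `margin` away.
    d_neg = (user.unsqueeze(1) - neg_items).pow(2).sum(-1).sqrt()
    push = torch.clamp(margin - d_neg, min=0.0).pow(2).sum(-1)
    return (pull + push).mean()

u = torch.randn(4, 64, requires_grad=True)   # user embeddings
p = torch.randn(4, 64)                       # positive item embeddings
n = torch.randn(4, 10, 64)                   # 10 sampled negatives per user
loss = push_pull_loss(u, p, n, ctx=torch.ones(4))
loss.backward()
```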

31 pages, 1740 KiB  
Article
Automated Over-the-Top Service Copyright Distribution Management System Using the Open Digital Rights Language
by Wooyoung Son, Soonhong Kwon, Sungheun Oh and Jong-Hyouk Lee
Electronics 2024, 13(2), 336; https://doi.org/10.3390/electronics13020336 - 12 Jan 2024
Abstract
As the demand for and diversity of digital content increase, consumers now have simple and easy access to digital content through Over-the-Top (OTT) services. However, the rights of copyright holders remain unsecured due to issues with illegal copying and distribution of digital content, along with unclear practices in copyright royalty settlements and distributions. In response, this paper proposes an automated OTT service copyright distribution management system using the Open Digital Rights Language (ODRL) to safeguard the rights of copyright holders in the OTT service field. The proposed system ensures that copyright transactions and agreements, such as the trading of copyright, can only be carried out when all copyright holders of a single digital content item agree, based on the Threshold Schnorr Digital Signature. This approach takes into account multiple joint copyright holders, thereby safeguarding their rights. Furthermore, it ensures fair and transparent distribution of copyright royalties based on the ratio information outlined in ODRL. From the user's perspective, the system not only provides services proactively based on the rights information specified in ODRL, but also employs zero-knowledge proof technology to handle sensitive information in OTT service copyright distribution, thereby addressing existing privacy concerns. This approach not only considers joint copyright holders, but also demonstrates its effectiveness in resolving prevalent issues in current OTT services, such as illegal digital content replication and distribution, and the unfair settlement and distribution of copyright royalties. Applying the proposed system to existing OTT services and the digital content market is expected to lead to the revitalization of the digital content trading market and the establishment of an OTT service environment that guarantees both vitality and reliability.
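
For readers unfamiliar with ODRL, policies are commonly serialized as JSON-LD. The sketch below shows the general shape of such a policy plus a hypothetical royalty-split table of the kind settled from ratio information; all identifiers, parties, and values are invented for illustration, and a real deployment would follow the W3C ODRL Information Model vocabulary.

```python
# Minimal ODRL-style policy as a Python dict (JSON-LD); values are invented.
import json

policy = {
    "@context": "http://www.w3.org/ns/odrl.jsonld",
    "@type": "Agreement",
    "uid": "https://example.com/policy/ott-movie-001",
    "permission": [{
        "target": "https://example.com/asset/movie-001",
        "action": "play",
        "assigner": "https://example.com/party/rights-holders",
        "assignee": "https://example.com/party/subscriber-42",
    }],
}

# Hypothetical royalty-split ratios carried alongside the policy.
royalty_ratio = {"holder-A": 0.6, "holder-B": 0.4}
assert abs(sum(royalty_ratio.values()) - 1.0) < 1e-9
print(json.dumps(policy, indent=2))
```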

19 pages, 2995 KiB  
Article
An Electrocardiogram Classification Using a Multiscale Convolutional Causal Attention Network
by Chaoqun Guo, Bo Yin and Jianping Hu
Electronics 2024, 13(2), 326; https://doi.org/10.3390/electronics13020326 - 12 Jan 2024
Abstract
Electrocardiograms (ECGs) play a pivotal role in the diagnosis and prediction of cardiovascular diseases (CVDs). However, traditional methods for ECG classification involve intricate signal processing steps, leading to high design costs. Addressing this concern, this study introduces the Multiscale Convolutional Causal Attention network (MSCANet), which utilizes a multiscale convolutional neural network combined with causal convolutional attention mechanisms for ECG signal classification from the PhysioNet MIT-BIH Arrhythmia database. Simultaneously, the dataset is balanced by downsampling the majority class and oversampling the minority class using the Synthetic Minority Oversampling Technique (SMOTE), effectively categorizing the five heartbeat types in the test dataset. The experimental results showcase the classifier's performance, evaluated through accuracy, precision, sensitivity, and F1-score, culminating in an overall accuracy of 99.35%, precision of 96.55%, sensitivity of 96.73%, and F1-score of 96.63%, surpassing existing methods. In addition, the application of this data balancing technique significantly addresses the issue of data imbalance. Compared to the data before balancing, there was a significant improvement in accuracy for the S class and the F class, with increases of approximately 8% and 13%, respectively.
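
The balancing step can be reproduced in outline with the imbalanced-learn library; the class labels, feature shapes, and target counts below are placeholders rather than the paper's pipeline or the MIT-BIH statistics.

```python
# Sketch of majority downsampling plus SMOTE oversampling (illustrative data).
import numpy as np
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from imblearn.under_sampling import RandomUnderSampler

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 32))                      # stand-in beat features
y = np.array(["N"] * 900 + ["S"] * 60 + ["F"] * 40)  # imbalanced classes

balance = Pipeline([
    ("down", RandomUnderSampler(sampling_strategy={"N": 300})),  # trim majority
    ("up", SMOTE(random_state=42)),  # synthesize minority samples up to parity
])
X_bal, y_bal = balance.fit_resample(X, y)
print(dict(zip(*np.unique(y_bal, return_counts=True))))  # all classes at 300
```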

30 pages, 7882 KiB  
Article
Unsupervised Multiview Fuzzy C-Means Clustering Algorithm
by Ishtiaq Hussain, Kristina P. Sinaga and Miin-Shen Yang
Electronics 2023, 12(21), 4467; https://doi.org/10.3390/electronics12214467 - 30 Oct 2023
Abstract
The rapid development of information technology makes it easier to collect vast amounts of data through the cloud, the internet, and other sources of information. Multiview clustering is a significant approach for clustering multiview data, which may come from multiple sources. The fuzzy c-means (FCM) algorithm for clustering (single-view) datasets was extended in the literature to process multiview datasets, called the multiview FCM (MV-FCM). However, most of the MV-FCM clustering algorithms and their extensions in the literature need prior information about the number of clusters and are also highly influenced by initializations. In this paper, we propose a novel MV-FCM clustering algorithm with an unsupervised learning framework, called the unsupervised MV-FCM (U-MV-FCM), such that it can search for an optimal number of clusters during the iteration process of the algorithm without the number of clusters being given a priori. It is also free of initializations and parameter selection. We then use three synthetic and six benchmark datasets to make comparisons between the proposed U-MV-FCM and other existing algorithms and to highlight its practical implications. The experimental results show that our proposed U-MV-FCM algorithm is superior and more useful for clustering multiview datasets.
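
As background, one iteration of the classical single-view FCM that MV-FCM and U-MV-FCM build on looks as follows (a numpy sketch on a toy two-cluster dataset; the paper's algorithm additionally learns view weights and the number of clusters).

```python
# One fuzzy c-means iteration: membership update, then center update.
import numpy as np

def fcm_step(X, centers, m=2.0, eps=1e-9):
    # Distances to each cluster center: d[i, k] = ||x_i - v_k||.
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + eps
    # Membership update: u[i,k] = 1 / sum_j (d[i,k]/d[i,j])^(2/(m-1)).
    u = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))).sum(axis=2)
    # Center update: v_k = sum_i u[i,k]^m x_i / sum_i u[i,k]^m.
    w = u ** m
    return u, (w.T @ X) / w.sum(axis=0)[:, None]

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(6, 1, (50, 2))])
centers = X[rng.choice(len(X), size=2, replace=False)]
for _ in range(20):
    u, centers = fcm_step(X, centers)
print(centers)   # centers approach the two generating means
```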

31 pages, 13108 KiB  
Article
Speech Emotion Recognition Using Convolutional Neural Networks with Attention Mechanism
by Konstantinos Mountzouris, Isidoros Perikos and Ioannis Hatzilygeroudis
Electronics 2023, 12(20), 4376; https://doi.org/10.3390/electronics12204376 - 23 Oct 2023
Abstract
Speech emotion recognition (SER) is an interesting and difficult problem to handle. In this paper, we deal with it through the implementation of deep learning networks. We have designed and implemented six different deep learning networks: a deep belief network (DBN), a simple deep neural network (SDNN), an LSTM network (LSTM), an LSTM network with the addition of an attention mechanism (LSTM-ATN), a convolutional neural network (CNN), and a convolutional neural network with the addition of an attention mechanism (CNN-ATN), with the aim, apart from solving the SER problem, of testing the impact of the attention mechanism on the results. Dropout and batch normalization techniques are also used to improve the generalization ability (prevention of overfitting) of the models as well as to speed up the training process. The Surrey Audio–Visual Expressed Emotion (SAVEE) database and the Ryerson Audio–Visual Database (RAVDESS) were used for the training and evaluation of our models. The results showed that the networks with the addition of the attention mechanism did better than the others. Furthermore, they showed that the CNN-ATN was the best among the tested networks, achieving an accuracy of 74% for the SAVEE database and 77% for the RAVDESS, and exceeding existing state-of-the-art systems for the same datasets.
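
The attention mechanism referred to here can be as simple as learned attention pooling over time frames; below is a minimal sketch of that idea (not the authors' exact architecture; frame counts and dimensions are assumed).

```python
# Attention pooling: score each frame, softmax-normalize, weighted-sum.
import torch
import torch.nn as nn

class AttentionPool(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, h):                        # h: (batch, frames, dim)
        a = torch.softmax(self.score(h), dim=1)  # per-frame weights
        return (a * h).sum(dim=1)                # (batch, dim) utterance vector

feats = torch.randn(8, 120, 128)    # e.g. 120 time frames of CNN features
pooled = AttentionPool(128)(feats)  # feeds a softmax emotion classifier head
```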

16 pages, 6054 KiB  
Article
Enhancement of Product-Inspection Accuracy Using Convolutional Neural Network and Laplacian Filter to Automate Industrial Manufacturing Processes
by Hyojae Jun and Im Y. Jung
Electronics 2023, 12(18), 3795; https://doi.org/10.3390/electronics12183795 - 07 Sep 2023
Abstract
The automation of the manufacturing process of printed circuit boards (PCBs) requires accurate PCB inspections, which in turn require clear images that accurately represent the product PCBs. However, if low-quality images are captured during the involved image-capturing process, accurate PCB inspections cannot be guaranteed. Therefore, this study proposes a method to effectively detect defective images for PCB inspection. The method uses a convolutional neural network (CNN) and a Laplacian filter to classify the captured images as normal or defective, achieving an accuracy 11.87% higher than that of existing methods. Notably, the classification accuracy obtained using both a CNN and a Laplacian filter is higher than that obtained using only CNNs. Furthermore, applying the proposed method to images of computer components other than PCBs results in a 5.2% increase in classification accuracy compared with only using CNNs.
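
A common way to apply a Laplacian filter to image-quality screening is the variance-of-Laplacian test sketched below with OpenCV; this is a generic illustration, not the paper's pipeline (which feeds a CNN), and the threshold is an assumption to be tuned per camera setup.

```python
# Variance-of-Laplacian sharpness test: low variance suggests a blurred capture.
import cv2

def is_blurry(image_path, threshold=100.0):
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    score = cv2.Laplacian(gray, cv2.CV_64F).var()  # variance of edge response
    return score < threshold, score

blurry, score = is_blurry("pcb_capture.png")       # illustrative file name
print(f"variance of Laplacian = {score:.1f}, defective = {blurry}")
```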

19 pages, 664 KiB  
Article
Collaborative Mixture-of-Experts Model for Multi-Domain Fake News Detection
by Jian Zhao, Zisong Zhao, Lijuan Shi, Zhejun Kuang and Yazhou Liu
Electronics 2023, 12(16), 3440; https://doi.org/10.3390/electronics12163440 - 14 Aug 2023
Abstract
With the widespread popularity of online social media, people have come to increasingly rely on it as an information and news source. However, the growing spread of fake news on the Internet has become a serious threat to cyberspace and society at large. Although a series of previous works have proposed various methods for the detection of fake news, most of these methods focus on single-domain fake-news detection, resulting in poor detection performance when considering real-world fake news with diverse news topics. Furthermore, any news content may belong to multiple domains. Therefore, detecting multi-domain fake news remains a challenging problem. In this study, we propose a multi-domain fake-news detection framework based on a mixture-of-experts model. The input text is fed to BertTokenizer, and embeddings are obtained by jointly calling CLIP to obtain the fusion features. This avoids the introduction of noise and redundant features during feature fusion. We also propose a collaboration module, in which a sentiment module is used to analyze the inherent sentimental information of the text, and sentence-level and domain embeddings are used to form the collaboration module. This module can adaptively determine the weights of the expert models. Finally, the mixture-of-experts model, composed of TextCNN, is used to learn the features and construct a high-performance fake-news detection model. We conduct extensive experiments on the Weibo21 dataset, and the results indicate that our multi-domain method performs well in comparison with baseline methods. Our proposed framework presents greatly improved multi-domain fake-news detection performance.
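
The adaptive expert weighting can be pictured with a minimal mixture-of-experts layer. In the paper the gate weights come from the collaboration module's sentiment, sentence, and domain embeddings; this sketch simply gates on a plain feature vector, and all sizes are assumptions.

```python
# Mixture of experts: a gating network softmax-weights per-sample expert outputs.
import torch
import torch.nn as nn

class MixtureOfExperts(nn.Module):
    def __init__(self, dim, n_experts=5, n_classes=2):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, n_classes))
             for _ in range(n_experts)])
        self.gate = nn.Linear(dim, n_experts)

    def forward(self, x):                           # x: (batch, dim)
        w = torch.softmax(self.gate(x), dim=-1)     # adaptive expert weights
        outs = torch.stack([e(x) for e in self.experts], dim=1)  # (batch, E, C)
        return (w.unsqueeze(-1) * outs).sum(dim=1)  # weighted expert mixture

logits = MixtureOfExperts(dim=256)(torch.randn(4, 256))  # (4, 2)
```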

23 pages, 6530 KiB  
Article
Deep-Learning-Based Natural Ventilation Rate Prediction with Auxiliary Data in Mismeasurement Sensing Environments
by Subhin Yang, Mintai Kim and Sungju Lee
Electronics 2023, 12(15), 3294; https://doi.org/10.3390/electronics12153294 - 31 Jul 2023
Abstract
Predicting the amount of natural ventilation by utilizing environmental data such as differential pressure, wind, temperature, and humidity with IoT sensing is an important issue for optimal HVAC control to maintain comfortable air quality. Recently, some research has been conducted using deep learning to provide high accuracy in natural ventilation prediction. Therefore, high reliability of IoT sensing data is required to achieve predictions successfully. However, it is practically difficult to predict the natural ventilation rate (NVR) accurately in a mismeasurement sensing environment, since inaccurate IoT sensing data are collected, for example, due to sensor malfunction. Therefore, we need a way to provide high deep-learning-based NVR prediction accuracy in mismeasurement sensing environments. In this study, to overcome the degradation of accuracy due to mismeasurement, we use complementary auxiliary data generated by semi-supervised learning and selected by importance analysis. That is, the NVR prediction model is reliably trained by generating and selecting auxiliary data, and then the natural ventilation is predicted by integrating the mismeasured and auxiliary data with a bagging-based ensemble approach. Based on the experimental results, we confirmed that the proposed method improved the natural ventilation rate prediction accuracy by 25% compared with the baseline approach. In the context of deep-learning-based natural ventilation prediction using various IoT sensing data, we address the issue of realistic mismeasurement by generating auxiliary data that utilize the rapidly or slowly changing characteristics of the sensing data, which can improve the reliability of observation data.

20 pages, 3362 KiB  
Article
Autonomous Drone Electronics Amplified with Pontryagin-Based Optimization
by Jiahao Xu and Timothy Sands
Electronics 2023, 12(11), 2541; https://doi.org/10.3390/electronics12112541 - 05 Jun 2023
Abstract
In the era of electrification and artificial intelligence, direct current motors are widely utilized with numerous innovative adaptive and learning methods. Traditional methods utilize model-based algebraic techniques with system identification, such as recursive least squares, extended least squares, and autoregressive moving averages. The newer method known as deterministic artificial intelligence employs physics-based process dynamics to achieve target trajectory tracking. There are two common autonomous trajectory-generation algorithms: sinusoidal function- and Pontryagin-based generation algorithms. The Pontryagin-based optimal trajectory with deterministic artificial intelligence for DC motors is proposed, and its performance compared, for the first time in this paper. This paper aims to simulate the model-following and deterministic artificial intelligence methods using the sinusoidal and Pontryagin methods and to compare the differences in their performance when following the challenging step function slew maneuver.
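
For reference, the standard DC motor model that such trajectory generators act on can be simulated in a few lines; the forward-Euler sketch below uses illustrative parameter values, not the paper's.

```python
# DC motor dynamics:  L di/dt = V - R i - Ke w   (electrical)
#                     J dw/dt = Kt i - b w       (mechanical)
import numpy as np

R, L, Ke, Kt, J, b = 1.0, 0.5, 0.01, 0.01, 0.01, 0.1  # illustrative parameters
dt, steps = 1e-3, 5000
i = w = 0.0
for _ in range(steps):
    V = 12.0                        # a step voltage command (step slew maneuver)
    di = (V - R * i - Ke * w) / L
    dw = (Kt * i - b * w) / J
    i, w = i + dt * di, w + dt * dw
print(f"steady-state speed ~ {w:.2f} rad/s")
```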

14 pages, 374 KiB  
Article
Intelligent Detection of Cryptographic Misuse in Android Applications Based on Program Slicing and Transformer-Based Classifier
by Lizhen Wang, Jizhi Wang, Tongtong Sui, Lingrui Kong and Yue Zhao
Electronics 2023, 12(11), 2460; https://doi.org/10.3390/electronics12112460 - 30 May 2023
Abstract
The utilization of cryptography in applications has assumed paramount importance with the escalating security standards for Android applications. The adept utilization of cryptographic APIs can significantly enhance application security; however, in practice, software developers frequently misuse these APIs due to their inadequate grasp of cryptography. A study reveals that a staggering 88% of Android applications exhibit some form of cryptographic misuse. Although certain tools have been proposed to detect such misuse, most of them rely on manually devised rules which are susceptible to errors and require researchers possessing an exhaustive comprehension of cryptography. In this study, we propose a research methodology founded on a neural network model to pinpoint code related to cryptography by employing program slices as a dataset. We subsequently employ active learning, rooted in clustering, to select the portion of the data harboring security issues for annotation in accordance with the Android cryptography usage guidelines. Ultimately, we feed the dataset into a transformer and multilayer perceptron (MLP) to derive the classification outcome. Comparative experiments are also conducted to assess the model's efficacy in comparison to other existing approaches. Furthermore, planned combination tests utilizing supplementary techniques aim to validate the model's generalizability.

15 pages, 4882 KiB  
Article
Design of Enhanced Document HTML and the Reliable Electronic Document Distribution Service
by Hyun-Cheon Hwang and Woo-Je Kim
Electronics 2023, 12(10), 2176; https://doi.org/10.3390/electronics12102176 - 10 May 2023
Abstract
Electronic documents are becoming increasingly popular in various industries and sectors as they provide greater convenience and cost-efficiency than physical documents. PDF is a widely used format for creating and sharing electronic documents, while HTML is commonly used in mobile environments as the foundation for creating web pages displayed on mobile devices, such as smartphones and tablets. HTML is becoming a more critical document format as mobile environments have become the primary communication channel. However, HTML has no standard content-integrity feature, and an electronic document based on HTML consists of a set of related files; it is therefore vulnerable as a basis for reliable electronic documents. We previously proposed Document HTML, a single independent file with extended meta tags, to serve as a reliable electronic document, and Chained Document, a single independent file tied to a blockchain network to secure content integrity and delivery assurance. In this paper, we improved the definition of Document HTML and researched certified electronic document intermediaries. Additionally, we designed and validated the electronic document distribution service using Enhanced Document HTML for real usability. Moreover, we conducted experimental verification using a tax notification electronic document, which has one of the highest distribution volumes in Korea, to confirm how Document HTML provides a content-integrity verification feature. Document HTML can be used by an enterprise that must send a reliable electronic document to a customer through an electronic document delivery service provider.
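
The content-integrity idea can be illustrated with a self-verifying digest carried in a meta tag. The sketch below is ours, and the tag name is invented for illustration; it is not the Document HTML specification, which additionally anchors integrity in a blockchain.

```python
# Seal an HTML document with a SHA-256 digest and verify it later.
import hashlib
import re

def seal(html: str) -> str:
    # Embed a digest of the original document in a meta tag (name invented).
    digest = hashlib.sha256(html.encode("utf-8")).hexdigest()
    meta = f'<meta name="x-doc-integrity" content="sha256:{digest}">'
    return html.replace("<head>", "<head>" + meta, 1)

def verify(sealed: str) -> bool:
    m = re.search(r'content="sha256:([0-9a-f]{64})"', sealed)
    body = re.sub(r'<meta name="x-doc-integrity"[^>]*>', "", sealed, count=1)
    return bool(m) and hashlib.sha256(body.encode("utf-8")).hexdigest() == m.group(1)

doc = "<html><head><title>Tax notice</title></head><body>Amount due: 100</body></html>"
assert verify(seal(doc)) and not verify(seal(doc).replace("100", "999"))
```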

13 pages, 444 KiB  
Article
BSTC: A Fake Review Detection Model Based on a Pre-Trained Language Model and Convolutional Neural Network
by Junwen Lu, Xintao Zhan, Guanfeng Liu, Xinrong Zhan and Xiaolong Deng
Electronics 2023, 12(10), 2165; https://doi.org/10.3390/electronics12102165 - 09 May 2023
Abstract
Detecting fake reviews can help customers make better purchasing decisions and maintain a positive online business environment. In recent years, pre-trained language models have significantly improved the performance of natural language processing tasks. These models are able to generate different representation vectors for each word in different contexts, thus solving the challenge of a word having multiple meanings, which traditional word vector methods such as Word2Vec cannot solve, and therefore better capturing the text's contextual information. In addition, we consider that reviews generally contain rich opinion and sentiment expressions, while most pre-trained language models, including BERT, lack the consideration of sentiment knowledge in the pre-training stage. Based on the above considerations, we propose a new fake review detection model based on a pre-trained language model and a convolutional neural network, which is called BSTC. BSTC combines BERT, SKEP, and TextCNN, where SKEP is a pre-trained language model based on sentiment knowledge enhancement. We conducted a series of experiments on three gold-standard datasets, and the findings illustrate that BSTC outperforms state-of-the-art methods in detecting fake reviews. It achieved the highest accuracy on all three gold-standard datasets (Hotel, Restaurant, and Doctor) with 93.44%, 91.25%, and 92.86%, respectively.
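
The TextCNN component is a standard convolutional text classifier; here is a compact sketch of such a head over frozen encoder outputs (filter sizes and counts are assumptions, not BSTC's exact configuration).

```python
# TextCNN head: parallel 1-D convolutions over token embeddings, max-pooled.
import torch
import torch.nn as nn

class TextCNN(nn.Module):
    def __init__(self, emb_dim=768, n_filters=100, sizes=(2, 3, 4), n_classes=2):
        super().__init__()
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb_dim, n_filters, k) for k in sizes])
        self.fc = nn.Linear(n_filters * len(sizes), n_classes)

    def forward(self, x):                  # x: (batch, seq_len, emb_dim)
        x = x.transpose(1, 2)              # Conv1d expects (batch, channels, seq)
        pooled = [torch.relu(c(x)).max(dim=2).values for c in self.convs]
        return self.fc(torch.cat(pooled, dim=1))

tokens = torch.randn(4, 128, 768)          # e.g. frozen BERT/SKEP outputs
logits = TextCNN()(tokens)                 # (4, 2): genuine vs. fake review
```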

24 pages, 669 KiB  
Article
Complement Recognition-Based Formal Concept Analysis for Automatic Extraction of Interpretable Concept Taxonomies from Text
by Stefano Ferilli
Electronics 2023, 12(9), 2137; https://doi.org/10.3390/electronics12092137 - 07 May 2023
Abstract
The increasing scale and pace of the production of digital documents have generated a need for automatic tools to analyze documents and extract underlying concepts and knowledge in order to help humans manage information overload. Specifically, since most information comes in the form of text, natural language processing tools are needed that are able to analyze the sentences and transform them into an internal representation that can be handled by computers to perform inferences and reasoning. In turn, these tools often work based on linguistic resources for the various levels of analysis (morphological, lexical, syntactic and semantic). The resources are language (and sometimes even domain) specific and typically must be manually produced by human experts, increasing their cost and limiting their availability. Especially relevant are concept taxonomies, which allow us to properly interpret the textual content of documents. This paper presents an intelligent module to extract relevant domain knowledge from free text by means of Concept Hierarchy Extraction techniques. In particular, the underlying model is provided using Formal Concept Analysis, while a crucial role is played by an expert system for language analysis that can recognize different types of indirect objects (a component very rich in information) in English.
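
At its core, Formal Concept Analysis derives concepts as closed (extent, intent) pairs; the brute-force toy sketch below enumerates them for a tiny object-attribute context. It is illustrative only: practical extractors, including the one in this paper, work over contexts mined from text with far more efficient algorithms.

```python
# Enumerate formal concepts of a tiny context via the two derivation operators.
from itertools import combinations

objects = {"doc1": {"agent", "action"}, "doc2": {"agent", "place"},
           "doc3": {"agent", "action", "place"}}
attributes = {"agent", "action", "place"}

def extent(B):   # objects that have every attribute in B
    return {g for g, attrs in objects.items() if B <= attrs}

def intent(A):   # attributes shared by every object in A
    return set.intersection(*(objects[g] for g in A)) if A else set(attributes)

concepts = set()
for r in range(len(attributes) + 1):
    for B in combinations(sorted(attributes), r):
        A = extent(set(B))                       # closure: (B', B'') is a concept
        concepts.add((frozenset(A), frozenset(intent(A))))

for A, B in sorted(concepts, key=lambda c: -len(c[0])):
    print(sorted(A), "<->", sorted(B))           # the concept lattice's nodes
```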

18 pages, 746 KiB  
Article
MazeGen: A Low-Code Framework for Bootstrapping Robotic Navigation Scenarios for Smart Manufacturing Contexts
by Ivan Hugo Guevara and Tiziana Margaria
Electronics 2023, 12(9), 2058; https://doi.org/10.3390/electronics12092058 - 29 Apr 2023
Abstract
In this research, we describe the MazeGen framework (a maze generator), which generates navigation scenarios using Grammatical Evolution for robots or drones to navigate. The maze generator uses evolutionary algorithms to create robotic navigation scenarios with different semantic levels along a scenario profile. Grammatical Evolution is a Machine Learning technique from the Evolutionary Computing branch that uses a BNF grammar to describe the language of the possible scenario universe and a numerical encoding of individual scenarios along that grammar. Through a mapping process, it converts new numerical individuals, obtained by operations on the parents' encodings, into new solutions by means of the grammar. In this context, the grammar describes the scenario elements and some composition rules. We also analyze associated concepts of complexity, understanding complexity as the cost of production of the scenario and the skill levels needed to move around the maze. Preliminary results and statistics evidence a low correlation between complexity and the number of obstacles placed, as configurations with more difficult obstacle dispositions were found in the early stages of the evolution process. Moreover, when analyzing mazes in terms of their semantic meaning, earlier versions of the experiment not only proved too simplistic for the Smart Manufacturing domain but also lacked correlation with possible real-world scenarios: in our experiments, the most semantically meaningful results had the lowest fitness scores. The results also show the emerging-technology status of this approach, as we still need to find out how to reliably find solvable scenarios and characterize those belonging to the same equivalence class. Despite being an emerging technology, MazeGen allows users to simplify the process of building configurations for smart manufacturing environments by making it faster, more efficient, and reproducible, and it also puts the non-expert programmer at the center of the development process, as little boilerplate code is needed.
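
The genotype-to-phenotype mapping at the heart of Grammatical Evolution is compact enough to sketch: codons select productions modulo the number of options for the leftmost non-terminal. The toy grammar below, generating maze rows, is invented for illustration; MazeGen's actual BNF describes scenario elements and composition rules.

```python
# Classic GE mapping: codon % len(options) picks the production to expand.
grammar = {
    "<maze>": [["<row>"], ["<row>", "<maze>"]],
    "<row>": [["<cell>"], ["<cell>", "<row>"]],
    "<cell>": [["."], ["#"], ["S"], ["G"]],   # floor, obstacle, start, goal
}

def ge_map(genome, start="<maze>", max_wraps=2):
    seq, i, wraps = [start], 0, 0
    while any(s in grammar for s in seq):
        j = next(k for k, s in enumerate(seq) if s in grammar)  # leftmost NT
        options = grammar[seq[j]]
        if i >= len(genome):                  # wrap the genome if codons run out
            i, wraps = 0, wraps + 1
            if wraps > max_wraps:
                return None                   # mapping failure, common in GE
        seq[j:j + 1] = options[genome[i] % len(options)]
        i += 1
    return "".join(seq)

print(ge_map([7, 2, 14, 4, 31, 8, 12, 3, 9, 5, 1, 6]))   # e.g. "S.G"
```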

24 pages, 800 KiB  
Article
Clustered Federated Learning Based on Momentum Gradient Descent for Heterogeneous Data
by Xiaoyi Zhao, Ping Xie, Ling Xing, Gaoyuan Zhang and Huahong Ma
Electronics 2023, 12(9), 1972; https://doi.org/10.3390/electronics12091972 - 24 Apr 2023
Abstract
Data heterogeneity may significantly deteriorate the performance of federated learning, since the clients' data distributions are divergent. To mitigate this issue, an effective method is to partition the clients into suitable clusters. However, existing clustered federated learning is only based on the gradient descent method, which leads to poor convergence performance. To accelerate the convergence rate, this paper proposes clustered federated learning based on momentum gradient descent (CFL-MGD) by integrating momentum and clustering techniques. In CFL-MGD, scattered clients are partitioned into the same cluster when they have the same learning tasks. Meanwhile, each client in the same cluster utilizes its own private data to update local model parameters through momentum gradient descent. Moreover, we present gradient averaging and model averaging as two alternatives for global aggregation. To understand the proposed algorithm, we also prove that CFL-MGD converges at an exponential rate for smooth and strongly convex loss functions. Finally, we validate the effectiveness of CFL-MGD on the CIFAR-10 and MNIST datasets.
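
The numerical core, momentum-SGD local updates followed by cluster-level model averaging, can be sketched on a least-squares toy problem. This is illustrative only; CFL-MGD additionally clusters clients by learning task and offers gradient averaging as an alternative aggregation.

```python
# Momentum local steps on each client, then model averaging within a cluster.
import numpy as np

def local_momentum_steps(w, X, y, v, lr=0.1, beta=0.9, steps=5):
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)   # gradient of 0.5*||Xw - y||^2 / n
        v = beta * v + grad                 # momentum buffer
        w = w - lr * v
    return w, v

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
clients = []
for _ in range(4):                          # clients sharing one learning task
    X = rng.normal(size=(100, 2))
    clients.append((X, X @ w_true + 0.01 * rng.normal(size=100)))

w_global = np.zeros(2)
for _ in range(20):                         # communication rounds
    local_models = [local_momentum_steps(w_global.copy(), X, y, np.zeros(2))[0]
                    for X, y in clients]
    w_global = np.mean(local_models, axis=0)   # model averaging
print(w_global)                             # approaches [2, -1]
```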

21 pages, 993 KiB  
Article
Activity Recognition in Smart Homes via Feature-Rich Visual Extraction of Locomotion Traces
by Samaneh Zolfaghari, Silvia M. Massa and Daniele Riboni
Electronics 2023, 12(9), 1969; https://doi.org/10.3390/electronics12091969 - 24 Apr 2023
Abstract
The proliferation of sensors in smart homes makes it possible to monitor human activities, routines, and complex behaviors in an unprecedented way. Hence, human activity recognition has gained increasing attention over the last few years as a tool to improve healthcare and well-being in several applications. However, most existing activity recognition systems rely on cameras or wearable sensors, which may be obtrusive and may invade the user's privacy, especially at home. Moreover, extracting expressive features from a stream of data provided by heterogeneous smart-home sensors is still an open challenge. In this paper, we investigate a novel method to detect activities of daily living by exploiting unobtrusive smart-home sensors (i.e., passive infrared position sensors and sensors attached to everyday objects) and vision-based deep learning algorithms, without the use of cameras or wearable sensors. Our method relies on depicting the locomotion traces of the user and visual cues about their interaction with objects on a floor plan map of the home, and utilizes pre-trained deep convolutional neural networks to extract features for recognizing the ongoing activity. One additional advantage of our method is its seamless extendibility with additional features based on the available sensor data. Extensive experiments with a real-world dataset and a comparison with state-of-the-art approaches demonstrate the effectiveness of our method.

27 pages, 517 KiB  
Article
A Secure and Anonymous Authentication Protocol Based on Three-Factor Wireless Medical Sensor Networks
by JoonYoung Lee, Jihyeon Oh and Youngho Park
Electronics 2023, 12(6), 1368; https://doi.org/10.3390/electronics12061368 - 13 Mar 2023
Abstract
Wireless medical sensor networks (WMSNs), a type of wireless sensor network (WSN), have enabled medical professionals to identify patients' health information in real time to identify and diagnose their conditions. However, since wireless communication is performed through an open channel, an attacker can steal or manipulate the transmitted and received information. Because these attacks are directly related to patients' lives, it is necessary to prevent them upfront by securing WMSN communication. Although authentication protocols are continuously developed to establish the security of WMSN communication, they are still vulnerable to attacks. Recently, Yuanbing et al. proposed a secure authentication scheme for WMSNs. They emphasized that their protocol is able to resist various attacks and can ensure mutual authentication. Unfortunately, this paper demonstrates that Yuanbing et al.'s protocol is vulnerable to stolen smart card attacks, ID/password guessing attacks, and sensor node capture attacks. In order to overcome the weaknesses of existing studies and to ensure the secure communication and user anonymity of WMSNs, we propose a secure and anonymous authentication protocol. The proposed protocol can prevent sensor capture, guessing, and man-in-the-middle attacks. To demonstrate the security of the proposed protocol, we perform various formal and informal analyses using the AVISPA tool, the ROR model, and BAN logic. Additionally, we compare its security aspects with those of related protocols to prove that the proposed protocol has excellent security. We also prove the effectiveness of our proposed protocol compared with related protocols in terms of computation and communication costs. Our protocol has low or comparable computation and communication costs compared to related protocols. Thus, our protocol can provide services in the WMSN environment.

14 pages, 485 KiB  
Article
Improving the Performance of Open-Set Recognition with Generated Fake Data
by András Pál Halász, Nawar Al Hemeary, Lóránt Szabolcs Daubner, Tamás Zsedrovits and Kálmán Tornai
Electronics 2023, 12(6), 1311; https://doi.org/10.3390/electronics12061311 - 09 Mar 2023
Abstract
Open-set recognition models, in addition to generalizing to unseen instances of known categories, have to identify samples of unknown classes that were not seen during the training phase. The main reason the latter is much more complicated is that there is very little or no information about the properties of these unknown classes. There are methodologies available to handle the unknowns. One possible method is to construct models for them by using generated inputs labeled as unknown. Generative adversarial networks are frequently deployed to generate synthetic samples representing unknown classes to create better models for known classes. In this paper, we introduce a novel approach to improve the accuracy of recognition methods while reducing the time complexity. Instead of generating synthetic input data to train neural networks, feature vectors are generated using the output of a hidden layer. This approach results in a less complex structure for the neural network representation of the classes. A distance-based classifier implemented by a convolutional neural network is used in our implementation. Our solution's open-set detection performance reaches an AUC value of 0.839 on the CIFAR-10 dataset, while the closed-set accuracy is 91.4%, the highest among the compared open-set recognition methods. The generator and discriminator networks are much smaller when generating synthetic inner features. There is no need to run these samples through the first part of the classifier with the convolutional layers. Hence, this solution not only gives better performance than generating samples in the input space but also makes it less expensive in terms of computational complexity.
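
The distance-based decision rule behind such classifiers can be sketched in a few lines: accept the nearest class representative if it is close enough, otherwise reject as unknown. The centroids and threshold below are illustrative stand-ins for the learned feature space, not the paper's trained model.

```python
# Distance-based open-set prediction with an unknown-rejection threshold.
import numpy as np

def open_set_predict(feat, centroids, threshold=3.0):
    d = np.linalg.norm(centroids - feat, axis=1)   # distance to each class
    k = int(np.argmin(d))
    return ("unknown", d[k]) if d[k] > threshold else (k, d[k])

rng = np.random.default_rng(0)
centroids = rng.normal(size=(10, 64)) * 5          # 10 known-class centroids
known = centroids[3] + 0.1 * rng.normal(size=64)   # near class 3
novel = rng.normal(size=64) * 5                    # far from every centroid
print(open_set_predict(known, centroids))          # -> (3, small distance)
print(open_set_predict(novel, centroids))          # -> likely ("unknown", ...)
```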

17 pages, 2372 KiB  
Article
RESTful API Analysis, Recommendation, and Client Code Retrieval
by Shang-Pin Ma, Ming-Jen Hsu, Hsiao-Jung Chen and Chuan-Jie Lin
Electronics 2023, 12(5), 1252; https://doi.org/10.3390/electronics12051252 - 05 Mar 2023
Abstract
Numerous companies create innovative software systems using Web APIs (Application Programming Interfaces). API search engines and API directory services, such as ProgrammableWeb, Rapid API Hub, APIs.guru, and API Harmony, have been developed to facilitate the utilization of various APIs. Unfortunately, most API systems provide only superficial support, with no assistance in obtaining relevant APIs or examples of code usage. To better realize the "FAIR" (Findability, Accessibility, Interoperability, and Reusability) features for the usage of Web APIs, in this study, we developed an API inspection system (referred to as API Prober) to provide a new API directory service with multiple supplemental functionalities. To facilitate the findability and accessibility of APIs, API Prober transforms OAS (OpenAPI Specifications) into a graph structure and automatically annotates the semantic concepts using LDA (Latent Dirichlet Allocation) and WordNet. To enhance interoperability, API Prober also classifies APIs by clustering OAS documents and recommends alternative services to be substituted or merged with the target service. Finally, to support reusability, API Prober makes it possible to retrieve examples of API utilization code in Java by parsing source code on GitHub. The experimental results demonstrate the effectiveness of API Prober in recommending relevant services and providing usage examples based on real-world client code. This research contributes viable methods to appropriately analyze and cluster Web APIs and to recommend APIs and client code examples.
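
The LDA-based annotation step can be pictured with scikit-learn on a toy corpus of endpoint descriptions; the documents below are invented examples, and API Prober further maps the resulting terms through WordNet.

```python
# Topic terms extracted from API descriptions serve as candidate concept labels.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "get weather forecast by city and day",
    "post payment transaction with credit card",
    "get current temperature and humidity for a location",
    "refund payment order and verify card balance",
]
vec = CountVectorizer(stop_words="english")
dtm = vec.fit_transform(docs)                       # document-term matrix
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(dtm)
terms = vec.get_feature_names_out()
for t, comp in enumerate(lda.components_):
    top = [terms[i] for i in comp.argsort()[-4:][::-1]]
    print(f"topic {t}:", top)   # roughly weather-ish vs. payment-ish term sets
```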

34 pages, 873 KiB  
Article
Applying Social Network Analysis to Model and Handle a Cross-Blockchain Ecosystem
by Gianluca Bonifazi, Francesco Cauteruccio, Enrico Corradini, Michele Marchetti, Domenico Ursino and Luca Virgili
Electronics 2023, 12(5), 1086; https://doi.org/10.3390/electronics12051086 - 22 Feb 2023
Abstract
In recent years, the huge growth in the number and variety of blockchains has prompted researchers to investigate the cross-blockchain scenario. In this setting, multiple blockchains coexist, and wallets can exchange data and money from one blockchain to another. The effective and efficient management of a cross-blockchain ecosystem is an open problem. This paper aims to address it by exploiting the potential of Social Network Analysis. This general objective is broken down into a set of activities. First, a social network-based model is proposed to represent such a scenario. Then, a multi-dimensional and multi-view framework is presented, which uses such a model to handle a cross-blockchain scenario. Such a framework allows all the results found in past research on Social Network Analysis to be applied to the cross-blockchain ecosystem. Afterwards, this framework is used to extract insights and knowledge patterns concerning the behavior of several categories of wallets in a cross-blockchain scenario. To verify the goodness of the proposed framework, it is applied to a real dataset derived from Multichain in order to identify various user categories and their "modus operandi". Finally, a new centrality measure is proposed, which identifies the most significant wallets in the ecosystem. This measure considers several viewpoints, each of which addresses a specific aspect that may make a wallet more or less central in the cross-blockchain scenario.
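
A multi-view centrality of this kind can be sketched with networkx: compute a standard centrality per blockchain (view) and combine the views with weights. The edge lists, view names, and weights below are invented for illustration; the paper's measure combines several richer viewpoints.

```python
# Per-view degree centrality, combined into one weighted wallet score.
import networkx as nx

views = {
    "chain-a": [("w1", "w2"), ("w2", "w3"), ("w1", "w3")],
    "chain-b": [("w1", "w4"), ("w4", "w5")],
}
view_weight = {"chain-a": 0.6, "chain-b": 0.4}

score = {}
for name, edges in views.items():
    g = nx.Graph(edges)
    for wallet, c in nx.degree_centrality(g).items():
        score[wallet] = score.get(wallet, 0.0) + view_weight[name] * c

for wallet, s in sorted(score.items(), key=lambda kv: -kv[1]):
    print(wallet, round(s, 3))   # most central wallets across both views first
```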

16 pages, 1042 KiB  
Article
An Empirical Study of Segmented Linear Regression Search in LevelDB
by Agung Rahmat Ramadhan, Min-guk Choi, Yoojin Chung and Jongmoo Choi
Electronics 2023, 12(4), 1018; https://doi.org/10.3390/electronics12041018 - 17 Feb 2023
Abstract
The purpose of this paper is to propose a novel search mechanism, called SLR (Segmented Linear Regression) search, based on the concept of a learned index. It is motivated by our observation that much of the big data collected and used by previous studies has a linearity property, meaning that keys and their stored locations show a strong linear correlation. This observation leads us to design SLR search, where we apply segmentation to the well-known machine learning algorithm, linear regression, for identifying a location from a given key. We devise two segmentation techniques, equal-size and error-aware, with consideration of both prediction accuracy and segmentation overhead. We implement our proposal in LevelDB, Google's key-value store, and verify that it can improve search performance by up to 12.7%. In addition, we find that the equal-size technique provides efficiency in training, while the error-aware one is tolerant of noisy data.
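
The idea of SLR search with equal-size segments can be sketched in numpy: fit a line per segment of the sorted keys, predict a position, then probe a window bounded by the segment's maximum training error. This is a sketch of the learned-index idea under those assumptions, not LevelDB code.

```python
# Equal-size segmented linear regression over sorted keys, with bounded probe.
import numpy as np

rng = np.random.default_rng(0)
keys = np.sort(rng.integers(0, 10**9, size=100_000))
SEG = 1_000                                    # keys per equal-size segment

models = []                                    # (first_key, slope, intercept, max_err)
for s in range(0, len(keys), SEG):
    idx = np.arange(s, min(s + SEG, len(keys)))
    slope, intercept = np.polyfit(keys[idx].astype(float), idx, 1)
    max_err = int(np.ceil(np.abs(slope * keys[idx] + intercept - idx).max()))
    models.append((keys[idx[0]], slope, intercept, max_err))
starts = [m[0] for m in models]

def slr_search(key):
    seg = max(0, np.searchsorted(starts, key, side="right") - 1)
    _, slope, intercept, max_err = models[seg]
    guess = int(slope * key + intercept)       # predicted position
    lo, hi = max(0, guess - max_err), min(len(keys), guess + max_err + 1)
    j = lo + int(np.searchsorted(keys[lo:hi], key))  # tiny bounded local search
    return j if j < len(keys) and keys[j] == key else -1

probe = int(keys[12_345])
assert keys[slr_search(probe)] == probe
```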

14 pages, 2091 KiB  
Article
2D Camera-Based Air-Writing Recognition Using Hand Pose Estimation and Hybrid Deep Learning Model
by Taiki Watanabe, Md. Maniruzzaman, Md. Al Mehedi Hasan, Hyoun-Sup Lee, Si-Woong Jang and Jungpil Shin
Electronics 2023, 12(4), 995; https://doi.org/10.3390/electronics12040995 - 16 Feb 2023
Abstract
Air-writing is a modern human–computer interaction technology that allows participants to write words or letters with finger or hand movements in free space in a simple and intuitive manner. Air-writing recognition is a particular case of gesture recognition in which gestures can be matched to write characters and digits in the air. Air-written characters show extensive variations depending on the various writing styles of participants and their speed of articulation, which makes effective character recognition quite a difficult task. To address these difficulties, this work proposes an air-writing system using a web camera. The proposed system consists of two parts: alphabet recognition and digit recognition. To assess our proposed system, two character datasets were used: an alphabetic dataset and a numeric dataset. We collected samples from 17 participants and asked each participant to write alphabetic characters (A to Z) and numeric digits (0 to 9) about 5–10 times. At the same time, we recorded the positions of the fingertips using MediaPipe. As a result, we collected 3166 samples for the alphabetic dataset and 1212 samples for the digit dataset. First, we preprocessed the dataset and then created two datasets: image data and padded sequential data. The image data were fed into a convolutional neural network (CNN) model, whereas the sequential data were fed into a bidirectional long short-term memory (BiLSTM) network. After that, we combined these two models and trained again with 5-fold cross-validation in order to increase the character recognition accuracy. In this work, this combined model is referred to as a hybrid deep learning model. Finally, the experimental results showed that our proposed system achieved an alphabet recognition accuracy of 99.3% and a digit recognition accuracy of 99.5%. We also validated our proposed system using another publicly available dataset, 6DMG. Our proposed system provided better recognition accuracy compared to the existing systems.
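
The two-branch idea can be sketched as follows: a small CNN encodes a rendered trajectory image while a BiLSTM encodes the padded fingertip sequence, and the two embeddings are concatenated for classification. All layer sizes here are illustrative assumptions, not the authors' configuration.

```python
# Hybrid CNN + BiLSTM model over image and (x, y) fingertip-sequence inputs.
import torch
import torch.nn as nn

class HybridAirWriting(nn.Module):
    def __init__(self, n_classes=26):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
            nn.Flatten())                              # -> (batch, 32*4*4)
        self.lstm = nn.LSTM(input_size=2, hidden_size=64,
                            batch_first=True, bidirectional=True)
        self.fc = nn.Linear(32 * 4 * 4 + 2 * 64, n_classes)

    def forward(self, img, seq):     # img: (B,1,64,64); seq: (B,T,2) coordinates
        h_img = self.cnn(img)
        _, (h_n, _) = self.lstm(seq)                   # (2, B, 64), one per direction
        h_seq = torch.cat([h_n[0], h_n[1]], dim=1)
        return self.fc(torch.cat([h_img, h_seq], dim=1))

logits = HybridAirWriting()(torch.randn(4, 1, 64, 64), torch.randn(4, 50, 2))
```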

19 pages, 1995 KiB  
Article
Experimental Analysis of Security Attacks for Docker Container Communications
by Haneul Lee, Soonhong Kwon and Jong-Hyouk Lee
Electronics 2023, 12(4), 940; https://doi.org/10.3390/electronics12040940 - 13 Feb 2023
Abstract
Docker has become widely used as an open-source platform for packaging and running applications as containers. It is in the limelight especially among companies and IT developers that provide cloud services, thanks to advantages such as application portability and its lightweight nature. Docker provides communication between multiple containers through internal network configuration, which makes it easier to configure various services by logically connecting containers to each other, but cyberattacks exploiting the vulnerabilities of the Docker container network, e.g., distributed denial of service (DDoS) and cryptocurrency mining attacks, have recently occurred. In this paper, we experiment with cyberattacks such as ARP spoofing, DDoS, and elevation-of-privilege attacks to show how attackers can execute various attacks, and we analyze the results in terms of network traffic, CPU consumption, and malicious reverse shell execution. In addition, by examining the attacks from the network perspective of the Docker container environment, we lay the groundwork for detecting and preventing lateral movement attacks that may occur between Docker containers.

11 pages, 808 KiB  
Article
Comparison of Deep Learning Models for Automatic Detection of Sarcasm Context on the MUStARD Dataset
by Alexandru-Costin Băroiu and Ștefan Trăușan-Matu
Electronics 2023, 12(3), 666; https://doi.org/10.3390/electronics12030666 - 29 Jan 2023
Abstract
Sentiment analysis is a major area of natural language processing (NLP) research, and its sub-area of sarcasm detection has received growing interest in the past decade. Many approaches have been proposed, from basic machine learning to multi-modal deep learning solutions, and progress has been made. Context has proven to be instrumental for sarcasm detection, and many techniques that use context to identify sarcasm have emerged. However, no NLP research has focused on sarcasm-context detection as the main topic. Therefore, this paper proposes an approach for the automatic detection of sarcasm context, aiming to develop models that can correctly identify the contexts in which sarcasm may occur or is appropriate. Using an established dataset, MUStARD, multiple models are trained and benchmarked to find the best performer for sarcasm-context detection. The best performer proves to be an attention-based long short-term memory architecture that achieves an F1 score of 60.1. Furthermore, we tested the performance of this model on the SARC dataset and compared it with other results reported in the literature to better assess the effectiveness of this approach. Future directions of study are opened, with the prospect of developing a conversational agent that could identify and even respond to sarcasm.
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)
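For readers who want a concrete starting point, here is a minimal sketch of an attention-based LSTM classifier of the general kind the abstract names as the best performer; the vocabulary size, sequence length, and layer widths are illustrative assumptions, not the authors' configuration.

```python
# A minimal sketch, not the paper's exact architecture.
import tensorflow as tf
from tensorflow.keras import layers

VOCAB, MAXLEN, EMB = 20000, 60, 128  # assumed hyperparameters

inp = layers.Input(shape=(MAXLEN,))
x = layers.Embedding(VOCAB, EMB)(inp)
h = layers.LSTM(64, return_sequences=True)(x)        # per-token hidden states
att = layers.Attention()([h, h])                     # self-attention over states
pooled = layers.GlobalAveragePooling1D()(att)        # aggregate attended states
out = layers.Dense(1, activation="sigmoid")(pooled)  # sarcasm-context or not
model = tf.keras.Model(inp, out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```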

16 pages, 406 KiB  
Article
Digital Service Platform and Innovation in Healthcare: Measuring Users’ Satisfaction and Implications
by Fotis Kitsios, Stavros Stefanakakis, Maria Kamariotou and Lambros Dermentzoglou
Electronics 2023, 12(3), 662; https://doi.org/10.3390/electronics12030662 - 28 Jan 2023
Cited by 2 | Viewed by 2104
Abstract
When it comes to scheduling health consultations, e-appointment systems are helpful for patients. Non-attendance is a common obstacle that many medical practitioners must endure when managing appointments in healthcare facilities and outpatient health settings. Prior surveys have found that many users are open to using such mechanisms and that patients would be likely to schedule an online appointment with their doctor if such a system were made accessible. Few studies, however, have sought to determine how well e-appointment systems work, how well they are received by their users, and whether or not they increase the number of appointments booked. The purpose of this research was to collect information that would help the executives of a state hospital in Thessaloniki, Greece, to improve their electronic appointment system by measuring their patients' level of satisfaction with it. The results show that the level of service provided by the electronic appointment system is not satisfactory, and that website quality, another significant factor, likewise fails to contribute to patient satisfaction. Full article
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)

17 pages, 721 KiB  
Article
Dual-Channel Edge-Featured Graph Attention Networks for Aspect-Based Sentiment Analysis
by Junwen Lu, Lihui Shi, Guanfeng Liu and Xinrong Zhan
Electronics 2023, 12(3), 624; https://doi.org/10.3390/electronics12030624 - 26 Jan 2023
Cited by 3 | Viewed by 1504
Abstract
The goal of aspect-based sentiment analysis (ABSA) is to identify the sentiment polarity of specific aspects in a context. Recently, graph neural networks have employed dependency tree syntactic information to assess the link between aspects and contextual words; nevertheless, most of this research has neglected phrases that are insensitive to syntactic analysis, as well as the interactions between the various aspects in a sentence. In this paper, we propose a dual-channel edge-featured graph attention networks model (AS-EGAT), which builds an aspect syntactic graph by enhancing the contextual syntactic dependency representation of key aspect words and the mutual affective relationship between the various aspects in the context, and builds a semantic graph through the self-attention mechanism. We treat the edge features as a significant factor in determining the weight coefficients of the attention mechanism, so as to efficiently mine the edge features of the graph attention network (GAT). As a result, the model can connect important sentiment features of related aspects when dealing with aspects that lack obvious sentiment expressions, pay close attention to the important words when dealing with multiple-word aspects, and extract sentiment features from sentences that are not sensitive to syntactic dependency trees by examining semantic features. Experimental results show that our proposed AS-EGAT model is superior to the current state-of-the-art baselines. Compared with the baseline models on the LAP14, REST15, REST16, MAMS, T-shirt, and Television datasets, the accuracy of our AS-EGAT model increased by 0.76%, 0.29%, 0.05%, 0.15%, 0.22%, and 0.38%, respectively, and the macro-F1 score increased by 1.16%, 1.16%, 1.23%, 0.37%, 0.53%, and 1.93%, respectively. Full article
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)
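To make the edge-featured attention idea concrete, the following is a minimal PyTorch sketch of a single graph attention layer in which edge features enter the attention score alongside the two node embeddings; the dimensions and the exact scoring form are assumptions, not the published AS-EGAT layer.

```python
# A minimal sketch of edge-featured graph attention (illustrative, not AS-EGAT).
import torch
import torch.nn as nn
import torch.nn.functional as F

class EdgeGATLayer(nn.Module):
    def __init__(self, in_dim, edge_dim, out_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)
        self.a = nn.Linear(2 * out_dim + edge_dim, 1, bias=False)

    def forward(self, h, e, adj):
        # h: (N, in_dim) node features, e: (N, N, edge_dim) edge features,
        # adj: (N, N) 0/1 adjacency mask.
        Wh = self.W(h)                              # (N, out_dim)
        N = Wh.size(0)
        hi = Wh.unsqueeze(1).expand(N, N, -1)       # sender copies
        hj = Wh.unsqueeze(0).expand(N, N, -1)       # receiver copies
        score = self.a(torch.cat([hi, hj, e], dim=-1)).squeeze(-1)
        score = F.leaky_relu(score).masked_fill(adj == 0, float("-inf"))
        alpha = torch.softmax(score, dim=-1)        # edge-aware attention weights
        return alpha @ Wh                           # aggregated messages
```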

16 pages, 683 KiB  
Article
DNN-Based Forensic Watermark Tracking System for Realistic Content Copyright Protection
by Jaehyoung Park, Jihye Kim, Jiyou Seo, Sangpil Kim and Jong-Hyouk Lee
Electronics 2023, 12(3), 553; https://doi.org/10.3390/electronics12030553 - 20 Jan 2023
Cited by 1 | Viewed by 1954
Abstract
The metaverse-related content market is active and the demand for immersive content is increasing. However, there is as yet no definition of how copyright is granted to content produced using artificial intelligence, and discussions are still ongoing. We expect that the need for copyright protection for immersive content used in the metaverse environment will emerge and that related copyright protection techniques will be required. In this paper, we present the idea of 3D-to-2D watermarking so that content creators can protect the copyright of immersive content available in the metaverse environment. We propose an immersive content copyright protection scheme that combines a deep neural network (DNN), i.e., a neural network composed of multiple hidden layers, with a forensic watermark. Full article
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)

18 pages, 3765 KiB  
Article
Software Development for Processing and Analysis of Data Generated by Human Eye Movements
by Radoslava Kraleva and Velin Kralev
Electronics 2023, 12(3), 485; https://doi.org/10.3390/electronics12030485 - 17 Jan 2023
Viewed by 1425
Abstract
This research focuses on a software application providing opportunities for the processing and analysis of data on human eye movements generated by a saccade sensor. The main functional capabilities of the developed application are presented as well. Three experiments were prepared in accordance with the experimental methodology. The first was related to the visualization of stimuli on a stimulation computer display, which was integrated into the developed application as a separate module. The second experiment was related to an interactive visualization of the projection of the participants' eye movements onto the stimulation computer display. The third experiment was related to an analysis of aggregated data on the decision time and the number of correct responses given by the participants to visual tasks. The tests showed that the application can be used as a stimulation center to visualize the stimuli and to recreate the experimental sessions. The summary of the results led to the conclusion that the number of correct responses to the visual tasks depended both on the type of motion of the stimuli and on the size of the displacement from the center of the aperture. Full article
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)
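A minimal sketch of the kind of aggregation described in the third experiment, using pandas; the file and column names are hypothetical.

```python
# A minimal sketch: mean decision time and correct-response counts per
# stimulus condition (hypothetical column names).
import pandas as pd

df = pd.read_csv("sessions.csv")  # hypothetical export of the session data
summary = (df.groupby(["motion_type", "displacement"])
             .agg(mean_decision_time=("decision_time_ms", "mean"),
                  correct_responses=("correct", "sum"),
                  trials=("correct", "size")))
print(summary)
```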

18 pages, 3839 KiB  
Article
Handwritten Numeral Recognition Integrating Start–End Points Measure with Convolutional Neural Network
by M. A. H. Akhand, Md. Rahat-Uz-Zaman, Shadmaan Hye and Md Abdus Samad Kamal
Electronics 2023, 12(2), 472; https://doi.org/10.3390/electronics12020472 - 16 Jan 2023
Cited by 1 | Viewed by 1883
Abstract
Convolutional neural network (CNN)-based methods have been successful in handwritten numeral recognition (HNR) applications. However, CNNs tend to misclassify similarly shaped numerals (i.e., numerals whose silhouettes look the same). This paper presents an enhanced HNR system that improves the classification accuracy for similarly shaped handwritten numerals by incorporating the terminal points into the CNN's recognition, which can be utilized in various emerging applications related to language translation. In handwritten numerals, the terminal points (i.e., the start and end positions) are considered additional properties for discriminating between similarly shaped numerals. The Start–End Writing Measure (SEWM) and its integration with a CNN are the main contributions of this research. Traditionally, the classification outcome of a CNN-based system is the numeral category with the highest probability. In the proposed system, along with such classification, the probability value (i.e., the CNN's confidence level) is also used as a regulating element. In parallel with the CNN's classification operation, SEWM measures the start–end points of the numeral image, suggesting the numeral category whose reference start–end points are closest to the measured ones. Finally, the output label, or the system's classification of the given numeral image, is provided by comparing the confidence level with a predefined threshold value. Compared with other existing methods, SEWM-CNN proves to be a suitable HNR method for Bengali and Devanagari numerals. Full article
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)
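The decision rule described in the abstract can be sketched in a few lines: trust the CNN when its confidence clears a predefined threshold, and otherwise defer to the start-end-point suggestion. The threshold value and the interfaces below are illustrative assumptions.

```python
# A minimal sketch of the confidence-gated fusion (illustrative interfaces).
import numpy as np

def classify(cnn_probs: np.ndarray, sewm_label: int, threshold: float = 0.9) -> int:
    """cnn_probs: per-class probabilities; sewm_label: class whose reference
    start-end points are closest to the measured ones."""
    cnn_label = int(np.argmax(cnn_probs))
    if cnn_probs[cnn_label] >= threshold:  # CNN is confident enough
        return cnn_label
    return sewm_label                       # defer to the start-end measure
```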

12 pages, 364 KiB  
Article
Information Systems Strategy and Security Policy: A Conceptual Framework
by Maria Kamariotou and Fotis Kitsios
Electronics 2023, 12(2), 382; https://doi.org/10.3390/electronics12020382 - 11 Jan 2023
Cited by 2 | Viewed by 3108
Abstract
As technology evolves, businesses face new threats and opportunities in the areas of information and information assets. These areas include information creation, refining, storage, and dissemination. Governments and other organizations around the world have begun prioritizing the protection of cyberspace as a pressing international issue, prompting a renewed emphasis on information security strategy development and implementation. While every nation's information security strategy is crucial, little work has been conducted to define a method for gauging national cybersecurity attitudes that takes into account factors and indicators specific to that nation. In order to develop a framework that incorporates the issues identified in current research in this area, this paper examines the fundamentals of information security strategy and the factors that affect its integration. This paper contributes by providing a model based on the ITU cybersecurity decisions, with the goal of developing a roadmap for the successful development and implementation of the National Cybersecurity Strategy in Greece, as well as identifying the factors at the national level that may be aligned with a country's cybersecurity level. Full article
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)

13 pages, 2017 KiB  
Article
Symmetrical Hardware-Software Design for Improving Physical Activity with a Gamified Music Step Sensor Box
by Bilal Ahmed, Handityo Aulia Putra, Seongwook Kim and Choongjae Im
Electronics 2023, 12(2), 368; https://doi.org/10.3390/electronics12020368 - 11 Jan 2023
Cited by 1 | Viewed by 2029
Abstract
Physical inactivity, the fourth leading cause of death worldwide, can harm the economy, national growth, community welfare, health, and quality of life. On the other hand, physical activity (PA) has numerous advantages, including fewer cardiovascular diseases, less cancer and diabetes, fewer psychological disorders, and improved cognitive abilities. Despite the benefits of PA, people are less likely to participate. The main factor is a lack of entertainment in exercise, which demotivates people from engaging in healthy activities. In this work, we propose a symmetrical hardware-software design that entertains people while they perform PA. We developed a step-box with sensors and a gamified music application synchronized with the footsteps. The purpose of this study is to show that incorporating appropriate gamification allows participants to engage actively in tedious but economical exercises. Participants (N = 90) took part in 20-minute daily exercise sessions for three days. A 5-point Likert scale was used to assess efficiency, effectiveness, and satisfaction following the exercise sessions. The results show that the gamified sensor step-box increased efficiency, effectiveness, and participant satisfaction. The findings suggest that applying gamification fundamentals to simple exercises increases excitement and may help people to maintain PA. Full article
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)

19 pages, 2120 KiB  
Article
Multi-Vehicle Trajectory Tracking towards Digital Twin Intersections for Internet of Vehicles
by Zhanhao Ji, Guojiang Shen, Juntao Wang, Mario Collotta, Zhi Liu and Xiangjie Kong
Electronics 2023, 12(2), 275; https://doi.org/10.3390/electronics12020275 - 5 Jan 2023
Cited by 7 | Viewed by 1428
Abstract
Digital Twin (DT) provides a novel idea for Intelligent Transportation Systems (ITS), while the Internet of Vehicles (IoV) provides abundant vehicle positioning data. However, complex interactions between vehicles, as well as offset and loss of measurements, can lead to tracking errors in DT trajectories. In this paper, we propose a multi-vehicle trajectory tracking framework towards DT intersections (MVT2DTI). Firstly, the positioning data are unified into the same coordinate system and associated with the tracked trajectories via matching. Secondly, a spatial–temporal tracker (STT) utilizes a long short-term memory network (LSTM) and a graph attention network (GAT) to extract spatial–temporal features for state prediction. Then, the distance matrix is computed as a proposed tracking loss that feeds tracking errors back to the tracker. Through the iteration of association and prediction, the unlabeled coordinates are connected into the DT trajectories. Finally, four datasets are generated to validate the effectiveness and efficiency of the framework. Full article
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)
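As a concrete illustration of the association step, the following sketch builds a distance matrix between predicted track positions and new detections and matches them with the Hungarian algorithm; the gating threshold is an assumption, and the paper's own matching procedure may differ.

```python
# A minimal sketch of track-to-detection association (illustrative gating).
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def associate(predicted, detections, gate=5.0):
    """predicted: (T, 2) predicted track positions; detections: (D, 2)."""
    cost = cdist(predicted, detections)       # pairwise Euclidean distances
    rows, cols = linear_sum_assignment(cost)  # minimum-cost matching
    return [(t, d) for t, d in zip(rows, cols) if cost[t, d] <= gate]
```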

24 pages, 481 KiB  
Article
GEAR: A General Inference Engine for Automated MultiStrategy Reasoning
by Stefano Ferilli
Electronics 2023, 12(2), 256; https://doi.org/10.3390/electronics12020256 - 4 Jan 2023
Cited by 7 | Viewed by 1379
Abstract
The pervasive use of AI today has created an urgent need for human-compliant AI approaches and solutions that can explain their behavior and decisions in human-understandable terms, especially in critical domains, so as to enforce trustworthiness and support accountability. The symbolic/logic approach to AI supports this need because it aims at reproducing human reasoning mechanisms. While much research has been carried out on single inference strategies, an overall approach that combines them is still missing. This paper argues for a new overall approach that merges all the single strategies, named MultiStrategy Reasoning. Based on an analysis of research on automated inference in AI, it selects a suitable setting for this approach, reviews the most promising approaches proposed for single inference strategies, and proposes a possible combination of deduction, abduction, abstraction, induction, argumentation, uncertainty and analogy. It also introduces the GEAR (General Engine for Automated Reasoning) inference engine, which has been developed to implement this vision. Full article
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)

16 pages, 1446 KiB  
Article
Hausdorff Distance and Similarity Measures for Single-Valued Neutrosophic Sets with Application in Multi-Criteria Decision Making
by Mehboob Ali, Zahid Hussain and Miin-Shen Yang
Electronics 2023, 12(1), 201; https://doi.org/10.3390/electronics12010201 - 31 Dec 2022
Cited by 5 | Viewed by 1763
Abstract
The Hausdorff distance is one of the important distance measures for studying the degree of dissimilarity between two sets, and it has been used in various fields under fuzzy environments. Among these, the framework of single-valued neutrosophic sets (SVNSs) is the one with the most potential to describe uncertain, inconsistent and indeterminate information in a comprehensive way, and so a Hausdorff distance for SVNSs is important. Thus, we propose two novel schemes to calculate the Hausdorff distance and its corresponding similarity measures (SMs) for SVNSs. In doing so, we first develop the two forms of Hausdorff distance between SVNSs based on the definition of the Hausdorff metric between two sets. We then use these new distance measures to construct several SMs for SVNSs. Some mathematical theorems regarding the proposed Hausdorff distances for SVNSs are also proven to strengthen their theoretical properties. To show the exact calculation behavior and distance measurement mechanism of our proposed methods in accordance with the properties of the Hausdorff metric, we utilize an intuitive numerical example that demonstrates the novelty and practicality of our proposed measures. Furthermore, we develop a multi-criteria decision making (MCDM) method under the single-valued neutrosophic environment using the proposed SMs based on our defined Hausdorff distance measures, called the single-valued neutrosophic MCDM (SVN-MCDM) method. In this connection, we employ our proposed SMs to compute the degree of similarity of each option with the ideal choice to identify the best alternative as well as to perform an overall ranking of the alternatives under study. We then apply our proposed SVN-MCDM scheme to solve two real-world MCDM problems under the single-valued neutrosophic environment to show its effectiveness and application. Full article
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)
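As a rough illustration only (the paper defines its own two forms), one plausible way to compute a Hausdorff-style distance between SVNSs is to treat each element's (truth, indeterminacy, falsity) triple as a point in [0,1]^3 and apply the classical max-min Hausdorff metric:

```python
# A minimal sketch, NOT the paper's exact definition: classical Hausdorff
# metric over membership triples viewed as points in [0,1]^3.
import numpy as np
from scipy.spatial.distance import cdist

def hausdorff_svns(A: np.ndarray, B: np.ndarray) -> float:
    """A, B: (n, 3) arrays of (t, i, f) membership triples."""
    d = cdist(A, B)  # pairwise Euclidean distances
    # h(A,B) = max(sup_a inf_b d(a,b), sup_b inf_a d(a,b))
    return max(d.min(axis=1).max(), d.min(axis=0).max())

def similarity(A, B):
    # One common way to turn a bounded distance into a similarity measure;
    # sqrt(3) is the diameter of [0,1]^3, so the result lies in [0, 1].
    return 1.0 - hausdorff_svns(A, B) / np.sqrt(3)
```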

18 pages, 2652 KiB  
Article
ROS System Facial Emotion Detection Using Machine Learning for a Low-Cost Robot Based on Raspberry Pi
by Javier Martínez and Julio Vega
Electronics 2023, 12(1), 90; https://doi.org/10.3390/electronics12010090 - 26 Dec 2022
Cited by 5 | Viewed by 2635
Abstract
Facial emotion recognition (FER) is a field of research with multiple solutions in the state of the art, focused on fields such as security, marketing or robotics. In the literature, several articles can be found in which algorithms are presented from different perspectives for detecting emotions. More specifically, among the emotion detection systems in the literature whose computational cores are low-cost, the results presented are usually obtained in simulation or with quite limited real tests. This article presents a facial emotion detection system—detecting emotions such as anger, happiness, sadness or surprise—that was implemented under the Robot Operating System (ROS), Noetic version, and is based on the latest machine learning (ML) techniques proposed in the state of the art. To make these techniques efficient enough to be executed in real time on a low-cost board, extensive experiments were conducted in a real-world environment using a low-cost general-purpose board, the Raspberry Pi 4 Model B. The final FER system proposed in this article is capable of running in real time, operating at more than 13 fps, without using any external accelerator hardware, which other works (reviewed in this article) require in order to achieve the same purpose. Full article
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)
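A minimal sketch of a ROS Noetic node of the kind described, subscribing to a camera topic and classifying each frame; the topic name is an assumption and predict_emotion is a placeholder for the trained FER pipeline.

```python
# A minimal sketch (assumed topic name; predict_emotion is a stand-in).
import rospy
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

bridge = CvBridge()

def predict_emotion(frame):
    return "happiness"  # placeholder for the trained FER model

def on_image(msg):
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
    rospy.loginfo("Detected emotion: %s", predict_emotion(frame))

rospy.init_node("fer_node")
rospy.Subscriber("/camera/image_raw", Image, on_image, queue_size=1)
rospy.spin()
```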

16 pages, 2911 KiB  
Article
An Approach for Matrix Multiplication of 32-Bit Fixed Point Numbers by Means of 16-Bit SIMD Instructions on DSP
by Ilia Safonov, Anton Kornilov and Daria Makienko
Electronics 2023, 12(1), 78; https://doi.org/10.3390/electronics12010078 - 25 Dec 2022
Cited by 4 | Viewed by 2167
Abstract
Matrix multiplication is an important operation for many engineering applications. Sometimes new features that include matrix multiplication should be added to existing, and even out-of-date, embedded platforms. In this paper, an unusual problem is considered: how to implement matrix multiplication of 32-bit signed integers and fixed-point numbers on a DSP that has SIMD instructions for 16-bit integers only. For the tasks examined, the matrix size may vary from several tens to two hundred. The proposed mathematical approach for dense rectangular matrix multiplication of 32-bit numbers comprises the decomposition of 32-bit matrices into matrices of 16-bit numbers, four matrix multiplications of 16-bit unsigned integers via the outer product, and correction of the outcome for signed integers and fixed-point numbers. Several tricks for performance optimization are analyzed. In addition, ways toward block-wise and parallel implementations are described. An implementation of the proposed method by means of 16-bit vector instructions is faster than matrix multiplication using 32-bit scalar instructions and demonstrates performance close to the theoretically achievable limit. The described technique can be generalized to matrix multiplication of n-bit integers and fixed-point numbers via handling matrices of n/2-bit integers. In conclusion, recommendations for practitioners who work on implementations of matrix multiplication for various DSPs are presented. Full article
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)
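The decomposition at the heart of the method can be verified in a few lines: split each 32-bit operand into 16-bit halves, form the four partial products, and recombine them with shifts. In this sketch, exact Python integers stand in for the DSP's 16-bit SIMD lanes, and the signed/fixed-point correction step is omitted.

```python
# A worked sketch of the 32-bit-to-16-bit decomposition (unsigned case only).
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(0, 2**32, size=(4, 4), dtype=np.uint64).astype(object)
B = rng.integers(0, 2**32, size=(4, 4), dtype=np.uint64).astype(object)

A_hi, A_lo = A >> 16, A & 0xFFFF  # 16-bit halves: A = A_hi * 2**16 + A_lo
B_hi, B_lo = B >> 16, B & 0xFFFF

C = ((A_hi.dot(B_hi)) << 32) \
    + ((A_hi.dot(B_lo) + A_lo.dot(B_hi)) << 16) \
    + A_lo.dot(B_lo)
assert (C == A.dot(B)).all()      # matches the direct 32-bit product
```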

16 pages, 2464 KiB  
Article
Explainable AI to Predict Male Fertility Using Extreme Gradient Boosting Algorithm with SMOTE
by Debasmita GhoshRoy, Parvez Ahmad Alvi and KC Santosh
Electronics 2023, 12(1), 15; https://doi.org/10.3390/electronics12010015 - 21 Dec 2022
Cited by 9 | Viewed by 2065
Abstract
Infertility is a common problem across the world, and male factors account for between 40% and 50% of cases. Existing artificial intelligence (AI) systems are often not human-interpretable. Further, clinicians are unaware of how data-analytical tools make decisions, and as a result such tools have had limited uptake in healthcare. Using explainable AI tools makes AI systems transparent and traceable, enhancing users' trust and confidence in decision-making. The main contribution of this study is to introduce an explainable model for investigating male fertility prediction. Nine features related to lifestyle and environmental factors are utilized to develop a male fertility prediction model. Five AI tools, namely the support vector machine, adaptive boosting, conventional extreme gradient boosting (XGB), random forest, and extra tree algorithms, are deployed with balanced and imbalanced datasets. To produce our model in a trustworthy way, explainable AI is applied. The techniques used are (1) local interpretable model-agnostic explanations (LIME) and (2) Shapley additive explanations (SHAP). Additionally, ELI5 is utilized to inspect feature importance. Finally, XGB outperformed the other models, obtaining an AUC of 0.98, which is optimal compared to existing AI systems. Full article
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)
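A minimal sketch of the SMOTE-plus-XGBoost-plus-SHAP pipeline described above, on synthetic data standing in for the nine lifestyle and environmental features; hyperparameters are illustrative.

```python
# A minimal sketch (synthetic stand-in data, illustrative settings).
import shap
import xgboost
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=9,
                           weights=[0.88], random_state=0)  # imbalanced classes
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y,
                                                    random_state=0)

X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_train, y_train)  # oversample
model = xgboost.XGBClassifier(eval_metric="logloss").fit(X_bal, y_bal)

explainer = shap.TreeExplainer(model)       # per-feature attributions
shap_values = explainer.shap_values(X_test)
shap.summary_plot(shap_values, X_test)      # global feature-importance view
```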

17 pages, 8332 KiB  
Article
Development of Manipulator Digital Twin Experimental Platform Based on RCP
by Zhe Dong, Xiaoyao Han, Yuntao Shi, Weifeng Zhai and Song Luo
Electronics 2022, 11(24), 4196; https://doi.org/10.3390/electronics11244196 - 15 Dec 2022
Cited by 1 | Viewed by 1263
Abstract
From the perspective of teaching and research, we developed a manipulator digital twin experiment platform (named the remote experience platform, REP) based on a rapid control prototype (RCP). The platform consists of a controlled target, a real-time controller, rapid prototype configuration software, and supervisory control software. The controlled target is a 6-DOF manipulator, divided into a physical entity and its digital twin. The 3D model and mathematical model of the manipulator were constructed as an experimental entity in a digital space. The whole system provides flexible and intuitive experimental scenes without the constraints of time and place. Based on RCP technology, students can design various complex control strategies using simulation tools such as Matlab/Simulink, then convert the graphical model into executable code to be executed on the target hardware. The framework and development methods of the proposed system are elaborated in this paper. An example is demonstrated, including the invocation of algorithms, one-click code generation and compilation, real-time verification, online parameter adjustment, and more. The feasibility and practicability of the system are verified through a PID control experiment on the manipulator. Full article
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)

13 pages, 807 KiB  
Article
An Artificial Visual System for Three Dimensional Motion Direction Detection
by Mianzhe Han, Yuki Todo and Zheng Tang
Electronics 2022, 11(24), 4161; https://doi.org/10.3390/electronics11244161 - 13 Dec 2022
Cited by 1 | Viewed by 1021
Abstract
For mammals, enormous amounts of visual information are processed by neurons of the visual nervous system. Research on direction selectivity is of great significance, and local direction-selective ganglion neurons have been discovered. However, this research is still at the one-dimensional level and concentrated on single cells, and it remains challenging to explain the function and mechanism of overall motion direction detection. In our previous papers, we proposed a motion direction detection mechanism at the two-dimensional level to address these problems. However, those studies did not take into account that the information in the left and right retinas differs, so the mechanism cannot be used to detect three-dimensional motion direction; further effort is required to develop a more realistic system in three dimensions. In this paper, we propose a new three-dimensional artificial visual system that extends the motion direction detection mechanism into three dimensions. We assume that a neuron can detect the local motion of a single-voxel object within three-dimensional space, and we take into consideration that the information of the left and right retinas differs. Based on this binocular disparity, a realistic motion direction mechanism for three dimensions is established: the neurons receive signals from the primary visual cortex of each eye and respond to motion in specific directions. A series of local direction-selective ganglion neurons is arrayed on the retina, each performing a logical AND operation. The response of each local direction detection neuron is further integrated by the next neural layer to obtain the global motion direction. We carried out several computer simulations to demonstrate the validity of the mechanism. They show that the proposed mechanism is capable of detecting the motion of complex three-dimensional objects, which is consistent with most known physiological experimental results. Full article
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)
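The logical-AND neuron can be illustrated directly: a local detector for direction d fires when a voxel is occupied at position p at time t and at p + d at time t + 1, and a global layer picks the direction with the most votes. The six axis-aligned directions below are a simplification of the paper's direction set.

```python
# A minimal sketch of AND-based local direction neurons plus a global vote.
import numpy as np

DIRECTIONS = {"+x": (1, 0, 0), "-x": (-1, 0, 0), "+y": (0, 1, 0),
              "-y": (0, -1, 0), "+z": (0, 0, 1), "-z": (0, 0, -1)}

def global_direction(frame_t, frame_t1):
    """frame_t, frame_t1: binary 3-D occupancy grids at t and t+1."""
    votes = {}
    for name, shift in DIRECTIONS.items():
        # Local neuron: voxel active at p at t AND at p + shift at t + 1.
        moved = np.roll(frame_t, shift, axis=(0, 1, 2))
        votes[name] = int(np.logical_and(moved, frame_t1).sum())
    return max(votes, key=votes.get)  # integrating layer: strongest response

f0 = np.zeros((8, 8, 8), dtype=bool); f0[2, 3, 4] = True
f1 = np.zeros_like(f0); f1[3, 3, 4] = True  # the voxel moved one step in +x
print(global_direction(f0, f1))             # -> "+x"
```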

13 pages, 2103 KiB  
Article
Smart Random Walk Distributed Secured Edge Algorithm Using Multi-Regression for Green Network
by Tanzila Saba, Khalid Haseeb, Amjad Rehman, Robertas Damaševičius and Saeed Ali Bahaj
Electronics 2022, 11(24), 4141; https://doi.org/10.3390/electronics11244141 - 12 Dec 2022
Viewed by 1117
Abstract
Smart communication has significantly advanced with the integration of the Internet of Things (IoT). Many devices and online services are utilized in the network system to cope with data gathering and forwarding. Recently, many traffic-aware solutions have explored autonomous systems to attain intelligent routing and flow of internet traffic with the support of artificial intelligence. However, the inefficient usage of nodes' batteries and long-range communication degrade the connectivity time between the deployed sensors and the end devices. Moreover, trustworthy route identification is another significant research challenge in formulating a smart system. Therefore, this paper presents a smart Random walk Distributed Secured Edge algorithm (RDSE), using a multi-regression model for IoT networks, which aims to enhance the stability of the chosen IoT network with the support of an optimal system. In addition, by using secured computing, the proposed architecture increases the trustworthiness of smart devices with the least node complexity. The proposed algorithm differs from other works in terms of the following factors. Firstly, it uses a random walk to form the initial routes with certain probabilities and later, by exploring a multi-variant function, it attains long-lasting communication with a high degree of network stability. This helps to improve the optimization criteria for the nodes' communication and efficiently utilizes energy in combination with the mobile edges. Secondly, the trusted factors successfully identify the normal nodes even when the system is compromised. Therefore, the proposed algorithm reduces data risks and offers a more reliable and private system. In addition, simulation-based testing reveals the significant performance of the proposed algorithm in comparison with existing work. Full article
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)

21 pages, 1977 KiB  
Article
Approach for Designing Real-Time IoT Systems
by Stanisław Deniziak, Mirosław Płaza and Łukasz Arcab
Electronics 2022, 11(24), 4120; https://doi.org/10.3390/electronics11244120 - 10 Dec 2022
Cited by 3 | Viewed by 1353
Abstract
Along with the rapid development of Internet of Things (IoT) technology over the past few years, opportunities for its implementation in service areas with real-time requirements have begun to be recognized. In this regard, one of the most important criteria is to maintain Quality of Service (QoS) parameters at an appropriate and sufficiently high level. The QoS level should ensure the delivery of data packets in the shortest time possible while preventing critical parameters relevant to real-time transmission from being exceeded. This article proposes a new methodology for designing real-time IoT systems. The premise of the proposed approach is to adapt selected solutions used in other types of systems that operate under real-time requirements. In this respect, an analogy to embedded systems with a distributed architecture has been noted and exploited; the main differences from embedded systems can primarily be seen in the communication layer. The methodology proposed in this article is based on the authors' model of real-time system functional specification and its mapping to the IoT architecture. In addition, the developed methodology makes extensive use of selected IoT architecture elements described in this article, as well as selected task scheduling methods and communication protocols. The proposed methodology for designing RTIoT systems relies on dedicated transmission serialization methods and dedicated routing protocols, which ensure that the time constraints for the assumed bandwidth of IoT links are met by appropriately prioritizing transmissions and determining communication routes. The presented approach can be used to design a broad class of RTIoT systems. Full article
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)

15 pages, 905 KiB  
Article
Hybrid Encryption Scheme for Medical Imaging Using AutoEncoder and Advanced Encryption Standard
by Yasmeen Alslman, Eman Alnagi, Ashraf Ahmad, Yousef AbuHour, Remah Younisse and Qasem Abu Al-haija
Electronics 2022, 11(23), 3967; https://doi.org/10.3390/electronics11233967 - 30 Nov 2022
Cited by 4 | Viewed by 1453
Abstract
Recently, medical image encryption has gained special attention due to the nature and sensitivity of medical data and the lack of effective image encryption based on innovative techniques. Several encryption schemes have been recommended and developed in an attempt to improve medical image encryption. The majority of these studies rely on conventional encryption techniques; however, such improvements have come with increased computational complexity and slower encryption and decryption processes. Alternatively, combining intelligent models such as deep learning with encryption schemes has exhibited more effective outcomes, especially when used with digital images. This paper aims to reduce and transform the data transferred between the interested parties and to overcome the problem of drawing unwanted conclusions from encrypted medical images. To do so, the target was to move from encrypting an image to encrypting features of the image, which are extracted as floating-point values. We therefore propose a deep learning-based image encryption scheme using the autoencoder (AE) technique and the advanced encryption standard (AES). Specifically, the proposed scheme encrypts the digest of the medical image prepared by the encoder of the autoencoder model on the encryption side; on the decryption side, the corresponding decoder from the autoencoder is used after decrypting the carried data. The autoencoder was also used to enhance the quality of corrupted medical images with different types of noise. In addition, we investigated the structural similarity (SSIM) and mean square error (MSE) scores of the proposed model by applying four different types of noise: salt and pepper, speckle, Poisson, and Gaussian. For all types of added noise, the decoder reduced the noise in the resulting images. Finally, the performance evaluation demonstrated that our proposed system reduces the encryption/decryption overhead by 50–75% compared with other existing models. Full article
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)
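A minimal sketch of the hybrid idea: encode the image to a latent vector, then AES-encrypt the vector's bytes. AES-GCM via pycryptodome and the 64-dimensional digest are illustrative choices, not necessarily the paper's exact mode or dimensionality.

```python
# A minimal sketch (illustrative mode and digest size; encode() is a stand-in
# for the trained autoencoder's encoder network).
import numpy as np
from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes

def encode(image: np.ndarray) -> np.ndarray:
    """Placeholder for the trained encoder: hypothetical 64-d float digest."""
    return image.astype(np.float32).reshape(-1)[:64]

key = get_random_bytes(16)
latent = encode(np.random.rand(28, 28))

cipher = AES.new(key, AES.MODE_GCM)
ciphertext, tag = cipher.encrypt_and_digest(latent.tobytes())

# Receiver side: decrypt, restore the latent vector, then run the decoder.
plain = AES.new(key, AES.MODE_GCM, nonce=cipher.nonce).decrypt_and_verify(
    ciphertext, tag)
restored = np.frombuffer(plain, dtype=np.float32)
assert np.array_equal(restored, latent)
```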

18 pages, 1910 KiB  
Article
Orientation Detection System Based on Edge-Orientation Selective Neurons
by Tianqi Chen, Bin Li and Yuki Todo
Electronics 2022, 11(23), 3946; https://doi.org/10.3390/electronics11233946 - 29 Nov 2022
Viewed by 1119
Abstract
In this paper, we propose an orientation detection mechanism based on edge-orientation-selective neurons. We assume that there are neurons in V1 that respond to objects' edges, and that each neuron responds optimally to a specific orientation within a local receptive field. The global orientation is inferred from the aggregation of local orientation information. An orientation detection system is further developed based on the proposed mechanism. We design four types of neurons for four local orientations and use these neurons to extract local orientation information. The global orientation is obtained from the neuron with the greatest activation. The performance of this orientation detection system is evaluated on orientation detection tasks. From the experimental results, we can conclude that our proposed global orientation mechanism is feasible and explainable. The mechanism-based orientation detection system shows better recognition accuracy and noise immunity than traditional convolutional neural network-based orientation detection systems and the EfficientNet-based orientation detection system, which are currently the most accurate. In addition, our edge-orientation-selective-cell-based artificial visual system greatly reduces computation time and learning cost compared with traditional convolutional neural networks and EfficientNet. Full article
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)

13 pages, 4739 KiB  
Article
Detection of Fake Replay Attack Signals on Remote Keyless Controlled Vehicles Using Pre-Trained Deep Neural Network
by Qasem Abu Al-Haija and Abdulaziz A. Alsulami
Electronics 2022, 11(20), 3376; https://doi.org/10.3390/electronics11203376 - 19 Oct 2022
Cited by 8 | Viewed by 2556
Abstract
Keyless systems have replaced the old-fashioned method of inserting a physical key into the keyhole to unlock the door, which is inconvenient and easily exploited by threat actors. Keyless systems use radio frequency (RF) technology as an interface to transmit signals from the key fob to the vehicle. However, keyless systems are also susceptible to compromise by a threat actor who intercepts the transmitted signal and performs a replay attack. In this paper, we propose a transfer learning-based model to identify replay attacks launched against remote keyless controlled vehicles. Specifically, the system makes use of a pre-trained ResNet50 deep neural network to classify the wireless remote signals used to lock or unlock the doors of a remote-controlled vehicle system. The signals are classified into three classes: real signal, fake signal high gain, and fake signal low gain. We trained our model for 100 epochs (3800 iterations) on the KeFRA 2022 dataset, a modern dataset. The model recorded a final validation accuracy of 99.71% and a final validation loss of 0.29%, at a low inference time of 50 ms, using the SGD solver. The experimental evaluation revealed the superiority of the proposed model. Full article
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)
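A minimal sketch of the transfer-learning setup described above: a frozen ImageNet-pretrained ResNet50 backbone with a new three-class head for real, fake-high-gain, and fake-low-gain signals; the input size and training details are assumptions.

```python
# A minimal sketch (assumed input size and optimizer settings).
import tensorflow as tf

base = tf.keras.applications.ResNet50(weights="imagenet", include_top=False,
                                      input_shape=(224, 224, 3))
base.trainable = False  # keep the pre-trained features frozen

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(3, activation="softmax"),  # the three signal classes
])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=100)  # 100 epochs per the paper
```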

13 pages, 1672 KiB  
Article
Majority Approximators for Low-Latency Data Bus Inversion
by Sung-il Pae and Kon-Woo Kwon
Electronics 2022, 11(20), 3352; https://doi.org/10.3390/electronics11203352 - 17 Oct 2022
Viewed by 1316
Abstract
Data bus inversion (DBI) is an encoding technique that saves power during data movement, in which the majority function plays an essential role. For latency optimization, the majority function can be replaced by a majority approximator that allows a small error in majority voting, yielding a faster encoder that still saves power. In this work, we propose two systematic approaches for finding high-performance majority approximators. First, we perform an exhaustive search of all possible Boolean functions to find an optimal approximator based on a certain circuit structure composed of fifteen logic gates. The approximator found by the systematic search can be implemented using compound gates, resulting in a latency-efficient design with only two gate levels. Compared with prior works based on heuristics, the proposed circuit runs at the same speed but achieves greater switching-activity savings. Second, we propose another majority approximator that averages three randomly permuted copies of the approximator found in the first approach. We show that the second proposed approximator achieves even higher savings in switching activity, as its function is closer to that of a true majority voter. We report various performance metrics of the newly found majority approximators based on syntheses using a 65 nm process. Full article
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)
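For context, the exact function the approximators estimate is easy to state: invert the outgoing word whenever a strict majority of bits would toggle relative to the previous bus state. A worked sketch for an 8-bit bus:

```python
# A worked sketch of exact DBI encoding (the function the approximators
# estimate with a small, latency-saving error).
def dbi_encode(prev: int, word: int, width: int = 8):
    toggles = bin((prev ^ word) & ((1 << width) - 1)).count("1")
    if toggles > width // 2:                # majority of bits would toggle
        return word ^ ((1 << width) - 1), 1  # send inverted word + DBI flag
    return word, 0

bus = 0b0000_0000
print(dbi_encode(bus, 0b1111_0111))
# -> (8, 1): transmit 0b0000_1000 with the flag set instead of toggling 7 bits
```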

15 pages, 3988 KiB  
Article
Cloud-Based, Expandable—Reconfigurable Remote Laboratory for Electronic Engineering Experiments
by Tinashe Chamunorwa, Horia Alexandru Modran, Doru Ursuțiu, Cornel Samoilă and Horia Hedeșiu
Electronics 2022, 11(20), 3292; https://doi.org/10.3390/electronics11203292 - 12 Oct 2022
Cited by 2 | Viewed by 2152
Abstract
This article describes the design and development of an NI myRIO device-based remote laboratory. The cloud-based, expandable, and reconfigurable remote laboratory is intended to give students access to an online web-based user interface to perform experiments. Multiple myRIO devices are programmed to host several experiments each. A finite state machine is used to select specific experiments, and a single state can contain several. The laboratory's web virtual instrument interfaces are hosted on the SystemLink cloud and the SystemLink server. A user-friendly interface has been designed to help students understand important electronic concepts. Virtual and real experiments were fused to give students a wide range of experiments they can access online, and the instructor can check the outputs of an experiment being executed on the device. Connecting myRIO to SystemLink through global variables ensured that the low-cost device was fully utilized, which makes the platform suitable for universities in developing countries that cannot afford expensive equipment. Students can perform the experiments, which bear some resemblance to physical execution. The system is expandable in that the number of myRIO devices or the number of experiments can be increased to suit changing requirements, and it is reconfigurable in that the finite-state-machine-based coding technique permits a single experiment to be selected, configured, and run while the other experiments remain idle. Full article
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)

24 pages, 1416 KiB  
Article
A Secure Personal Health Record Sharing System with Key Aggregate Dynamic Searchable Encryption
by Jihyeon Oh, JoonYoung Lee, MyeongHyun Kim, Youngho Park, KiSung Park and SungKee Noh
Electronics 2022, 11(19), 3199; https://doi.org/10.3390/electronics11193199 - 6 Oct 2022
Cited by 1 | Viewed by 1601
Abstract
Recently, as interest in individualized health has increased, the Personal Health Record (PHR) has attracted a lot of attention for prognosis prediction and accurate diagnosis. Cloud servers have been used to manage the PHR system, but privacy concerns are evident, since cloud servers process the entire PHR, which contains sensitive patient information. In addition, cloud servers manage the PHR system centrally, so patients lose direct control over their own PHRs, and cloud servers can be an attractive target for malicious users. Therefore, ensuring the integrity and privacy of the PHR and allocating authorization to users are important issues. In this paper, we propose a secure PHR sharing system using a blockchain, the InterPlanetary File System (IPFS), and a smart contract to ensure PHR integrity and secure verification. To guarantee patients' authority over the management of their own PHRs, as well as to provide convenient access, we suggest a key-aggregate dynamic searchable encryption scheme. We prove the security of the proposed scheme through informal and formal analyses, including an Automated Verification of Internet Security Protocols and Applications (AVISPA) simulation, Burrows–Abadi–Needham (BAN) logic, and security-model-based games. Furthermore, we estimate the computational costs of the proposed scheme using the Multiprecision Integer and Rational Arithmetic Cryptographic Library (MIRACL) and compare the results with those of previous works. Full article
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)

18 pages, 1798 KiB  
Article
Leveraging Machine Learning for Fault-Tolerant Air Pollutants Monitoring for a Smart City Design
by Muneeb A. Khan, Hyun-chul Kim and Heemin Park
Electronics 2022, 11(19), 3122; https://doi.org/10.3390/electronics11193122 - 29 Sep 2022
Cited by 4 | Viewed by 1327
Abstract
Air pollution has become a global issue due to its widespread impact on the environment, economy, civilization and human health. Owing to this, a lot of research has been done to tackle the issue. However, most of the existing methodologies suffer from issues such as high cost, limited deployment and maintenance capabilities, and handling only uni- or bi-variate pollutant concentrations. In this paper, a hybrid CNN-LSTM model is presented to forecast multivariate air pollutant concentrations for an Internet of Things (IoT)-enabled smart city design. The amalgamation of CNN and LSTM acts as an encoder-decoder, which improves the overall accuracy and precision. The performance of the proposed CNN-LSTM is compared with conventional and hybrid machine learning (ML) models on the basis of Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE) and Mean Square Error (MSE). The proposed model outperforms various state-of-the-art ML models, generating an average MAE, MAPE and MSE of 54.80%, 52.78% and 60.02%, respectively. Furthermore, the predicted results are cross-validated against the actual concentrations of air pollutants, and the proposed model achieves a high degree of prediction accuracy for real-time air pollutant concentrations. Moreover, a cross-grid cooperative scheme is proposed to tackle the IoT monitoring-station malfunction scenario and make pollutant monitoring more fault-resistant and robust. The scheme exploits the correlation between neighbouring monitoring stations and air pollutant concentrations, generating an average MAPE and MSE of 10.90% and 12.02%, respectively. Full article
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)
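A minimal sketch of a hybrid CNN-LSTM forecaster in the encoder-decoder spirit described above; the window length, feature count, and layer sizes are illustrative assumptions.

```python
# A minimal sketch (assumed window, feature count, and layer widths).
import tensorflow as tf
from tensorflow.keras import layers

WINDOW, FEATURES = 24, 6  # e.g., 24 hourly steps of 6 pollutants (assumed)

model = tf.keras.Sequential([
    layers.Conv1D(32, kernel_size=3, activation="relu", padding="same",
                  input_shape=(WINDOW, FEATURES)),  # convolutional encoder
    layers.MaxPooling1D(2),
    layers.LSTM(64),                                # temporal decoder state
    layers.Dense(FEATURES),                         # next-step concentrations
])
model.compile(optimizer="adam", loss="mse", metrics=["mae", "mape"])
```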

16 pages, 2000 KiB  
Article
Context-Based, Predictive Access Control to Electronic Health Records
by Evgenia Psarra, Dimitris Apostolou, Yiannis Verginadis, Ioannis Patiniotakis and Gregoris Mentzas
Electronics 2022, 11(19), 3040; https://doi.org/10.3390/electronics11193040 - 24 Sep 2022
Cited by 2 | Viewed by 1484
Abstract
Effective access control techniques are in demand, as electronically assisted healthcare services require access to patients' sensitive health records. In emergency situations, where the patient's well-being is jeopardized, the different healthcare actors associated with the emergency should be granted permission to access the patient's Electronic Health Records (EHRs). The research objective of our study is to develop machine learning techniques based on patients' time-sequential health metrics and to integrate them with an Attribute-Based Access Control (ABAC) mechanism. We propose an ABAC mechanism that can grant access to sensitive EHR systems by applying prognostic context handlers, in which contextual information is used to identify emergency conditions and permit access to medical records. Specifically, we use patients' recent health history to predict their health metrics for the next two hours by leveraging Long Short-Term Memory (LSTM) Neural Networks (NNs). These predicted health metric values are evaluated by our personalized fuzzy context handlers to predict the criticality of the patient's status. The developed access control method provides emergency clinicians with secure access to sensitive information while simultaneously safeguarding the patient's well-being. Integrating this predictive mechanism with personalized context handlers proved to be a robust way to enhance the performance of access control in modern EHR systems. Full article
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)

21 pages, 3080 KiB  
Article
RISC-Vlim, a RISC-V Framework for Logic-in-Memory Architectures
by Andrea Coluccio, Antonia Ieva, Fabrizio Riente, Massimo Ruo Roch, Marco Ottavi and Marco Vacca
Electronics 2022, 11(19), 2990; https://doi.org/10.3390/electronics11192990 - 21 Sep 2022
Cited by 3 | Viewed by 17825
Abstract
Most modern CPU architectures are based on the von Neumann principle, where memory and processing units are separate entities. Although processing unit performance has improved over the years, memory capacity has not followed the same trend, creating a performance gap between them. This problem, known as the "memory wall", severely limits the performance of a microprocessor. One of the most promising solutions is the "logic-in-memory" approach, which consists of merging memory and logic units, enabling data to be processed directly inside the memory itself. Here we propose a RISC-V framework that supports logic-in-memory operations. We substitute the data memory with a circuit capable of storing data and of performing in-memory computation. The framework is based on a standard memory interface, so different logic-in-memory architectures can be inserted into the microprocessor, based both on CMOS and on emerging technologies. The main advantage of this framework is the possibility of comparing the performance of different logic-in-memory solutions on code execution. We demonstrate the effectiveness of the framework using a CMOS volatile memory and a memory based on a new emerging technology, racetrack logic. The results demonstrate an improvement in algorithm execution speed and a reduction in energy consumption. Full article
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)

26 pages, 2347 KiB  
Article
A Multi-Objective Approach for Optimizing Edge-Based Resource Allocation Using TOPSIS
by Habiba Mohamed, Eyhab Al-Masri, Olivera Kotevska and Alireza Souri
Electronics 2022, 11(18), 2888; https://doi.org/10.3390/electronics11182888 - 13 Sep 2022
Cited by 6 | Viewed by 2017
Abstract
Existing approaches for allocating resources in edge environments are inefficient and lack support for heterogeneous edge devices, and in turn fail to reduce dependency on cloud infrastructures or datacenters. To this extent, in this paper we propose OpERA, a multi-layered edge-based resource allocation optimization framework that supports the heterogeneous and seamless execution of offloadable tasks across edge, fog, and cloud computing layers and architectures. By capturing offloadable task requirements, OpERA is capable of identifying suitable resources within nearby edge or fog layers, thus optimizing the execution process. Throughout the paper, we present results which show the effectiveness of our proposed optimization strategy in terms of reducing costs, minimizing energy consumption, and yielding further gains in processing computations, network bandwidth, and task execution time. We also demonstrate that optimizing resource allocation in computation offloading increases the likelihood of successful task offloading, particularly for the computationally intensive tasks that are becoming integral to many IoT applications, such as robotic surgery, autonomous driving, smart city monitoring device grids, and deep learning tasks. The evaluation of our OpERA optimization algorithm reveals that the TOPSIS MCDM technique effectively identifies optimal compute resources for processing offloadable tasks, with a 96% success rate. Moreover, the results from our experiments with a diverse range of use cases show that our OpERA optimization strategy can effectively reduce energy consumption by up to 88%, and operational costs by 76%, by identifying relevant compute resources. Full article
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)
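
For readers unfamiliar with the ranking step, the following is a minimal TOPSIS sketch in NumPy; the criteria matrix, weights, and cost/benefit flags are invented for illustration and are not taken from the OpERA paper:

```python
# Generic TOPSIS: rank candidate compute resources by closeness to the
# ideal solution. The data below is purely illustrative.
import numpy as np

# rows = candidate resources, cols = criteria (latency, energy, cost, CPU)
M = np.array([[12.0, 3.1, 0.8, 2.4],
              [ 9.0, 4.0, 1.1, 3.0],
              [15.0, 2.5, 0.6, 1.8]])
w = np.array([0.4, 0.3, 0.2, 0.1])               # criteria weights (sum to 1)
benefit = np.array([False, False, False, True])  # True = higher is better

V = M / np.linalg.norm(M, axis=0) * w   # weighted, vector-normalized matrix
ideal = np.where(benefit, V.max(0), V.min(0))
anti  = np.where(benefit, V.min(0), V.max(0))
d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)     # 1.0 = ideal resource
print(np.argsort(closeness)[::-1])      # resource indices, best first
```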

11 pages, 1511 KiB  
Article
Multi-Class Positive and Unlabeled Learning for High Dimensional Data Based on Outlier Detection in a Low Dimensional Embedding Space
by Cheong Hee Park
Electronics 2022, 11(17), 2789; https://doi.org/10.3390/electronics11172789 - 05 Sep 2022
Viewed by 1399
Abstract
Positive and unlabeled (PU) learning aims to learn a binary classifier from labeled positive data and unlabeled data containing samples of the positive class and unknown negative classes, whereas multi-class positive and unlabeled (MPU) learning aims to learn a multi-class classifier assuming labeled data from multiple positive classes. In this paper, we propose a two-step approach for MPU learning on high-dimensional data. In the first step, negative samples are selected from the unlabeled data using an ensemble of k-nearest neighbors-based outlier detection models in a low-dimensional space embedded by a linear discriminant function. We present a binary prediction approach which determines whether a data sample is negative. In the second step, the linear discriminant function is optimized on the labeled positive data and the negative samples selected in the first step, alternating between updating the parameters of the linear discriminant function and selecting reliable negative samples by detecting outliers in the low-dimensional space. Experimental results using high-dimensional text data demonstrate the high performance of the proposed MPU learning method. Full article
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)
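
A rough sketch of the first step as we read it, selecting likely negatives as kNN-distance outliers in an embedded space; the data, neighbor count, and quantile cutoff here are placeholders, not the paper's actual procedure:

```python
# Illustrative: flag unlabeled points that are far (in mean kNN distance)
# from the labeled positives as candidate negatives.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
X_pos = rng.normal(0.0, 1.0, size=(200, 10))   # labeled positive samples
X_unl = rng.normal(0.5, 2.0, size=(500, 10))   # unlabeled mixture

k = 5
nn = NearestNeighbors(n_neighbors=k).fit(X_pos)
dist, _ = nn.kneighbors(X_unl)
score = dist.mean(axis=1)                      # mean kNN distance = outlier score

threshold = np.quantile(score, 0.8)            # placeholder cutoff
candidate_negatives = X_unl[score > threshold] # passed to the second step
print(len(candidate_negatives), "candidate negatives selected")
```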

20 pages, 886 KiB  
Article
Built-In Functional Testing of Analog In-Memory Accelerators for Deep Neural Networks
by Abhishek Kumar Mishra , Anup Kumar Das and Nagarajan Kandasamy
Electronics 2022, 11(16), 2592; https://doi.org/10.3390/electronics11162592 - 18 Aug 2022
Cited by 1 | Viewed by 1409
Abstract
This paper develops a methodology for the online built-in self-testing of deep neural network (DNN) accelerators to validate their correct operation with respect to their functional specifications. The DNN of interest is realized in hardware to perform in-memory computing using non-volatile memory cells as computational units. Assuming a functional fault model, we develop methods to generate pseudorandom and structured test patterns to detect hardware faults. We also develop a test-sequencing strategy that combines these different classes of tests to achieve high fault coverage. The testing methodology is applied to a broad class of DNNs trained to classify images from the MNIST, Fashion-MNIST, and CIFAR-10 datasets, with the goal of exposing hardware faults which may lead to the incorrect classification of images. We achieve an average fault coverage of 94% for these different architectures, some of which are large and complex. Full article
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)

16 pages, 1412 KiB  
Article
A Sustainable Deep Learning-Based Framework for Automated Segmentation of COVID-19 Infected Regions: Using U-Net with an Attention Mechanism and Boundary Loss Function
by Imran Ahmed, Abdellah Chehri and Gwanggil Jeon
Electronics 2022, 11(15), 2296; https://doi.org/10.3390/electronics11152296 - 23 Jul 2022
Cited by 14 | Viewed by 1987
Abstract
COVID-19 has been spreading rapidly, affecting billions of people globally, with significant public health impacts. Biomedical imaging, such as computed tomography (CT), has significant potential as a possible substitute for the screening process. Because of this, automatic image segmentation is highly desirable as clinical decision support for an extensive evaluation of disease control and monitoring: it performs a central role in the precise segmentation of infected regions in CT scans, thus helping in screening, diagnosing, and monitoring the disease. For this purpose, we introduce a deep learning framework for the automated segmentation of COVID-19-infected lesions/regions in lung CT scan images. Specifically, we adopted a segmentation model, i.e., U-Net, and utilized an attention mechanism to enhance the framework’s ability to segment virus-infected regions. Since not all of the features extracted by the encoders are valuable for segmentation, we applied the U-Net architecture with an attention mechanism for a better representation of the features. Moreover, we applied a boundary loss function to deal with small and unbalanced lesion segmentations. Using different public CT scan image datasets, we validated the framework’s effectiveness against other segmentation techniques. The experimental outcomes showed the improved performance of the presented framework for the automated segmentation of lungs and infected areas in CT scan images. We also considered both the boundary loss and a weighted binary cross-entropy dice loss function. The overall dice accuracies of the framework are 0.93 and 0.76 for the lungs and the COVID-19-infected regions, respectively. Full article
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)

16 pages, 3564 KiB  
Article
ECG Heartbeat Classification Using CONVXGB Model
by Atiaf A. Rawi, Murtada K. Elbashir and Awadallah M. Ahmed
Electronics 2022, 11(15), 2280; https://doi.org/10.3390/electronics11152280 - 22 Jul 2022
Cited by 3 | Viewed by 2264
Abstract
Electrocardiogram (ECG) signals are reliable in identifying and monitoring patients with various cardiac diseases and severe cardiovascular syndromes, including arrhythmia and myocardial infarction (MI). Thus, cardiologists use ECG signals in diagnosing cardiac diseases. Machine learning (ML) has also proven its usefulness in the medical field and in signal classification. However, current ML approaches rely on hand-crafted feature extraction methods or very complicated deep learning networks. This paper presents a novel method for feature extraction from ECG signals and for ECG classification using a convolutional neural network (CNN) with eXtreme Gradient Boosting (XGBoost), ConvXGB. The model was established by stacking two convolutional layers for automatic feature extraction from ECG signals, followed by XGBoost as the last layer, which is used for classification. This technique simplifies ECG classification in comparison to other methods by minimizing the number of required parameters and eliminating the need for weight readjustment throughout the backpropagation phase. Furthermore, experiments on two famous ECG datasets–the Massachusetts Institute of Technology–Beth Israel Hospital (MIT-BIH) and Physikalisch-Technische Bundesanstalt (PTB) datasets–demonstrated that this technique handles the ECG signal classification issue better than either a CNN or XGBoost alone. In addition, a comparison showed that this model outperformed state-of-the-art models, with scores of 0.9938, 0.9839, 0.9836, 0.9837, and 0.9911 for accuracy, precision, recall, F1-score, and specificity, respectively. Full article
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)
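
The general pattern of feeding convolutional features into a gradient-boosted classifier can be sketched as follows; the layer sizes, beat length, and class count are illustrative assumptions, not the ConvXGB configuration:

```python
# Illustrative: extract features with a small 1-D CNN, then classify the
# feature vectors with XGBoost instead of a dense softmax head.
import numpy as np
from tensorflow import keras
from xgboost import XGBClassifier

n_samples, beat_len, n_classes = 512, 187, 5   # assumed beat windows
X = np.random.rand(n_samples, beat_len, 1).astype("float32")
y = np.random.randint(0, n_classes, n_samples)

extractor = keras.Sequential([
    keras.Input(shape=(beat_len, 1)),
    keras.layers.Conv1D(32, 5, activation="relu"),
    keras.layers.MaxPooling1D(2),
    keras.layers.Conv1D(64, 5, activation="relu"),
    keras.layers.GlobalAveragePooling1D(),      # fixed-size feature vector
])
features = extractor.predict(X, verbose=0)      # (512, 64); filters untrained here

clf = XGBClassifier(n_estimators=100, max_depth=4)
clf.fit(features, y)                            # boosting replaces the backprop head
print(clf.predict(features[:3]))
```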

18 pages, 1368 KiB  
Article
To Use or Not to Use: Impact of Personality on the Intention of Using Gamified Learning Environments
by Mouna Denden, Ahmed Tlili, Mourad Abed, Aras Bozkurt, Ronghuai Huang and Daniel Burgos
Electronics 2022, 11(12), 1907; https://doi.org/10.3390/electronics11121907 - 18 Jun 2022
Cited by 5 | Viewed by 2574
Abstract
Technology acceptance is essential for technology success. However, individual users are known to differ in their tendency to adopt and interact with new technologies. Among these individual differences, personality has been shown to be a predictor of users’ beliefs about technology acceptance. Gamification, on the other hand, has been shown to be a good solution for improving students’ motivation and engagement while learning. Despite the growing interest in gamification, less research attention has been paid to the effect of personality, specifically the Five Factor model (FFM), on gamification acceptance in learning environments. Therefore, this study develops a model to elucidate how personality traits affect students’ acceptance of gamified learning environments and their continued intention to use these environments. In particular, the Technology Acceptance Model (TAM) was used to examine the factors affecting students’ intentions to use a gamified learning environment. To test the research hypotheses, eighty-three students participated in this study, and structural equation modeling via Partial Least Squares (PLS) was performed. The obtained results showed that the research model, based on TAM and the FFM, provides a comprehensive understanding of the behaviors related to the acceptance of and intention to use gamified learning environments, as follows: (1) usefulness is the most influential factor in the intention to use the gamified learning environment; (2) unexpectedly, perceived ease of use has no significant effect on perceived usefulness or behavioral attitudes toward the gamified learning environment; (3) extraversion affects students’ perceived ease of use of the gamified learning environment; (4) neuroticism affects students’ perceived usefulness of the gamified learning environment; and (5) openness affects students’ behavioral attitudes toward using the gamified learning environment. This study can contribute to the Human–Computer Interaction field by providing researchers and practitioners with insights into how to motivate students with different personality traits to continue using gamified learning environments. Full article
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)

14 pages, 2087 KiB  
Article
Financial Data Anomaly Discovery Using Behavioral Change Indicators
by Audrius Lopata, Saulius Gudas, Rimantas Butleris, Vytautas Rudžionis, Liutauras Žioba, Ilona Veitaitė, Darius Dilijonas, Evaldas Grišius and Maarten Zwitserloot
Electronics 2022, 11(10), 1598; https://doi.org/10.3390/electronics11101598 - 17 May 2022
Cited by 1 | Viewed by 1589
Abstract
In this article, we present an approach to financial data analysis and anomaly discovery. In our view, the assessment of performance management requires the monitoring of key performance indicators (KPIs) of financial performance and of the characteristics of changes in KPIs over time. Based on this assumption, behavioral change indicators (BCIs) are introduced to detect and evaluate changes in traditional KPIs over time series. Three types of BCIs are defined: absolute change indicators (BCI-A), relative change indicators (ratio indicators, BCI-RE), and delta change indicators (D-BCI). The technique and advantages of using BCIs to identify unexpected deviations and assess the nature of KPI value changes in time series are discussed and illustrated in case studies. The architecture of the financial data analysis system for financial data anomaly detection is presented. The system prototype uses the Camunda business rules engine to specify KPI and BCI thresholds. The prototype was successfully put into practice in an analysis of actual financial records (historical data). Full article
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)
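
Reading the three indicator types literally, a toy version might look like this; the exact definitions (absolute change, ratio change, and delta of change) are our guesses for illustration, not the authors' formulas:

```python
# Illustrative change indicators over a monthly KPI series.
import numpy as np

kpi = np.array([100.0, 104.0, 103.0, 130.0, 131.0, 90.0])  # toy revenue KPI

bci_a  = np.diff(kpi)                 # absolute change per period
bci_re = kpi[1:] / kpi[:-1] - 1.0     # relative change (ratio indicator)
d_bci  = np.diff(bci_a)               # change of the change (delta indicator)

threshold = 0.15                      # e.g., flag >15% moves (placeholder)
anomalies = np.where(np.abs(bci_re) > threshold)[0] + 1
print("suspicious periods:", anomalies)  # -> the +26% jump and the -31% drop
```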

17 pages, 3158 KiB  
Article
Inter-continental Data Centre Power Load Balancing for Renewable Energy Maximisation
by Rasoul Rahmani, Irene Moser and Antonio L. Cricenti
Electronics 2022, 11(10), 1564; https://doi.org/10.3390/electronics11101564 - 13 May 2022
Cited by 1 | Viewed by 1779
Abstract
The ever-increasing popularity of Cloud and similar services pushes up the demand for data centres, which have a high power consumption. In an attempt to increase the sustainability of power generation, data centres have been fed by microgrids which include renewable generation—so-called ‘green data centres’. However, the peak load of data centres often does not coincide with solar generation, because demand mostly peaks in the evening. Shifting power to data centres incurs transmission losses; shifting the data transmission has no such drawback. We demonstrate the effectiveness of computational load shifting between data centres located in different time zones using a case study that balances demands between three data centres on three continents. This study contributes a method that exploits the opportunities provided by the varied timing of peak solar generation across the globe, transferring computational load to data centres that have sufficient renewable energy whenever possible. Our study shows that balancing computational loads between three green data centres on three continents can improve the use of renewables by up to 22%. Assuming the grid energy does not include renewables, this amounts to a 13% reduction in CO2 emissions. Full article
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)

14 pages, 2450 KiB  
Article
Transfer Learning Improving Predictive Mortality Models for Patients in End-Stage Renal Disease
by Edwar Macias, Jose Lopez Vicario, Javier Serrano, Jose Ibeas and Antoni Morell
Electronics 2022, 11(9), 1447; https://doi.org/10.3390/electronics11091447 - 30 Apr 2022
Cited by 1 | Viewed by 1520
Abstract
Deep learning is becoming a fundamental piece in the paradigm shift from evidence-based to data-based medicine. However, its learning capacity is rarely exploited when working with small data sets. Through transfer learning (TL), information from a source domain is transferred to a target domain to enhance a learning task in that domain. The proposed TL mechanisms are based on sample and feature space augmentation. Deep autoencoders extract complex representations of the data in the TL approach, and their latent representations, the so-called codes, are handled to transfer information among domains. The transfer of samples is carried out by computing a latent-space mapping matrix that links codes from both domains for later reconstruction. The feature space augmentation is based on computing the average of the most similar codes from one domain; this average augments the features in the target domain. The proposed framework is evaluated on the prediction of mortality in patients with end-stage renal disease, transferring information related to the mortality of patients with acute kidney injury from the massive MIMIC-III database. Compared to other TL mechanisms, the proposed approach improves on previous mortality prediction models by 6–11%. The integration of TL approaches into learning tasks in pathologies with data volume issues could encourage the use of data-based medicine in clinical settings. Full article
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)
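
The latent-space mapping step admits a compact linear sketch; assuming paired (or matched) codes from the two domains, a least-squares mapping matrix can be fitted as below, with the dimensions and data invented for illustration:

```python
# Illustrative: learn a linear map W from source-domain autoencoder codes
# to target-domain codes, then project new source codes for reuse.
import numpy as np

rng = np.random.default_rng(1)
codes_src = rng.normal(size=(300, 32))            # autoencoder codes, source
W_true = rng.normal(size=(32, 16))
codes_tgt = codes_src @ W_true + 0.01 * rng.normal(size=(300, 16))

# least-squares fit: codes_src @ W ~= codes_tgt
W, *_ = np.linalg.lstsq(codes_src, codes_tgt, rcond=None)

new_src = rng.normal(size=(5, 32))                # unseen source codes
transferred = new_src @ W                         # mapped into the target space
print(transferred.shape)                          # (5, 16); decode with target AE
```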

13 pages, 4871 KiB  
Article
An Artificial Visual System for Motion Direction Detection Based on the Hassenstein–Reichardt Correlator Model
by Chenyang Yan, Yuki Todo, Yuki Kobayashi, Zheng Tang and Bin Li
Electronics 2022, 11(9), 1423; https://doi.org/10.3390/electronics11091423 - 28 Apr 2022
Cited by 5 | Viewed by 1973
Abstract
The perception of motion direction is essential for the survival of visual animals. Despite various theoretical and biophysical investigations that have been conducted to elucidate directional selectivity at the neural level, the systemic mechanism of motion direction detection remains elusive. Here, we develop an artificial visual system (AVS) based on the core computation of the Hassenstein–Reichardt correlator (HRC) model for global motion direction detection. With reference to biological investigations of Drosophila, we first describe a local motion-sensitive, directionally selective neuron that only responds to ON motion signals with high pattern contrast in a particular direction. Then, we use the full-neurons-scheme motion direction detection mechanism from our previous research to detect the global motion direction. The mechanism enables our AVS to detect multiple directions in a two-dimensional view; the global motion direction is inferred from the outputs of all local motion-sensitive, directionally selective neurons. To verify the reliability of our AVS, we conduct a series of experiments and compare its performance with a time-considered convolutional neural network (CNN) and EfficientNetB0 under the same conditions. The experimental results demonstrate that our system reliably detects the direction of motion and that, among the three models, our AVS has the best motion direction detection capability. Full article
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)
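
The HRC's core computation is simple enough to show directly: two neighboring inputs are cross-correlated with a relative delay, and the sign of the opponent output encodes direction. The signal and delay below are toy values, not the paper's AVS:

```python
# Illustrative Hassenstein-Reichardt correlator for two photoreceptors.
import numpy as np

def hrc_response(a, b, delay=1):
    """Opponent correlator: positive = motion from a toward b."""
    a_d = np.roll(a, delay)      # low-pass/delay approximated by a shift
    b_d = np.roll(b, delay)
    return np.mean(a_d * b - a * b_d)

t = np.arange(200)
stimulus = np.sin(2 * np.pi * t / 40)
a = stimulus                      # receptor A
b = np.roll(stimulus, 3)          # receptor B sees the pattern 3 steps later

print(hrc_response(a, b))         # > 0: rightward motion (a -> b)
print(hrc_response(b, a))         # < 0: the mirrored pair reports leftward
```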

18 pages, 3965 KiB  
Article
ARMatrix: An Interactive Item-to-Rule Matrix for Association Rules Visual Analytics
by Rakshit Varu, Leonardo Christino and Fernando V. Paulovich
Electronics 2022, 11(9), 1344; https://doi.org/10.3390/electronics11091344 - 23 Apr 2022
Cited by 2 | Viewed by 1545
Abstract
Amongst the data mining techniques for exploratory analysis, association rule mining is a popular strategy given its ability to find causal rules between items that express regularities in a database. With large datasets, many rules can be generated, and visualization has been shown to be instrumental in such scenarios. Despite this relative success, existing visual representations are limited, suffering from low analytical capability and poor interactive support. This paper presents ARMatrix, a visual analytics framework for the analysis of association rules based on an interactive item-to-rule matrix metaphor, which aims to help users navigate sets of rules and gain insights about co-occurrence patterns. The usability of the proposed framework is illustrated using two user scenarios and then confirmed by the feedback received through a user test with 20 participants. Full article
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)

19 pages, 2171 KiB  
Article
Deep Learning Methods for Accurate Skin Cancer Recognition and Mobile Application
by Ioannis Kousis, Isidoros Perikos, Ioannis Hatzilygeroudis and Maria Virvou
Electronics 2022, 11(9), 1294; https://doi.org/10.3390/electronics11091294 - 19 Apr 2022
Cited by 47 | Viewed by 5297
Abstract
Although many efforts have been made over the past years, skin cancer recognition from medical images is still an active area of research aiming at more accurate results. Many recent efforts are based on deep learning neural networks; only a few, however, are based on a single deep learning model and targeted at creating a mobile application. Contributing to both efforts, we first present a summary of the required medical knowledge on skin cancer, followed by an extensive summary of the most recent related works. Afterwards, we present 11 candidate single CNN (convolutional neural network) architectures. We train and test those 11 CNN architectures using the HAM10000 dataset, covering seven skin lesion classes. To address the class imbalance problem and the high similarity between images of some skin lesions, we apply data augmentation (during training), transfer learning and fine-tuning. Of the 11 CNN architecture configurations, DenseNet169 produced the best results: it achieved an accuracy of 92.25%, a recall (sensitivity) of 93.59% and an F1-score of 93.27%, which outperforms existing state-of-the-art efforts. We used a light version of DenseNet169, mapped to a two-class model (benign or malignant), to construct a mobile Android application. A picture is taken via the mobile device camera and, after manual cropping, is classified as benign or malignant. The application can also inform the user about the allowed sun exposure time based on the current UV radiation level, the phototype of the user’s skin and the grade of the sunscreen used. In conclusion, we achieved state-of-the-art results in skin cancer recognition based on a single, relatively light deep learning model, which we also used in a mobile application. Full article
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)
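
The transfer-learning recipe the paper describes follows a common pattern; a minimal Keras sketch is given below, with the input size, augmentation choices, and training schedule as assumptions rather than the authors' exact configuration:

```python
# Illustrative: DenseNet169 pretrained on ImageNet, fine-tuned for
# 7 skin-lesion classes with light augmentation.
from tensorflow import keras

base = keras.applications.DenseNet169(
    include_top=False, weights="imagenet",
    input_shape=(224, 224, 3), pooling="avg")
base.trainable = False                       # stage 1: train the head only

inputs = keras.Input(shape=(224, 224, 3))
x = keras.layers.RandomFlip("horizontal")(inputs)   # augmentation (assumed)
x = keras.layers.RandomRotation(0.1)(x)
x = base(x, training=False)
outputs = keras.layers.Dense(7, activation="softmax")(x)
model = keras.Model(inputs, outputs)
model.compile(optimizer=keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)

base.trainable = True                        # stage 2: fine-tune at a low LR
model.compile(optimizer=keras.optimizers.Adam(1e-5),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```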

34 pages, 6620 KiB  
Article
LidSonic for Visually Impaired: Green Machine Learning-Based Assistive Smart Glasses with Smart App and Arduino
by Sahar Busaeed, Rashid Mehmood, Iyad Katib and Juan M. Corchado
Electronics 2022, 11(7), 1076; https://doi.org/10.3390/electronics11071076 - 29 Mar 2022
Cited by 13 | Viewed by 6390
Abstract
Smart wearable technologies such as fitness trackers are creating many new opportunities to improve the quality of life for everyone. It is usually impossible for visually impaired people to orient themselves in large spaces and navigate an unfamiliar area without external assistance. The design space for assistive technologies for the visually impaired is complex, involving many design parameters, including reliability, transparent object detection, hands-free operation, high-speed real-time operation, low battery usage, low computation and memory requirements, light weight, and price affordability. State-of-the-art devices for the visually impaired lack maturity and do not fully meet user satisfaction; thus, more effort is required to bring innovation to this field. In this work, we develop a pair of smart glasses called LidSonic that uses machine learning, LiDAR, and ultrasonic sensors to identify obstacles. The LidSonic system comprises an Arduino Uno device located in the smart glasses and a smartphone app that communicates data using Bluetooth. The Arduino collects data, manages the sensors on the smart glasses, detects objects using simple data processing, and provides buzzer warnings to visually impaired users. The smartphone app receives data from the Arduino, detects and identifies objects in the spatial environment, and provides verbal feedback about the objects to the user. Compared to image-processing-based glasses, LidSonic requires much less processing time and energy to classify objects, using simple LiDAR data containing 45 integer readings. We provide a detailed description of the system’s hardware and software design and its evaluation using nine machine learning algorithms. The data for the training and validation of the machine learning models were collected from real spatial environments. We developed the complete LidSonic system using off-the-shelf inexpensive sensors and a microcontroller board costing less than USD 80. The intention is to provide the design of an inexpensive, miniature, green device that can be built into, or mounted on, any pair of glasses or even a wheelchair to help the visually impaired. This work is expected to open new directions for smart glasses design using open software tools and off-the-shelf hardware. Full article
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)
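
Classifying a 45-value LiDAR sweep is a small tabular-learning problem; a toy sklearn sketch follows, with the obstacle classes and synthetic data standing in for the paper's real recordings:

```python
# Illustrative: classify obstacle types from 45-integer LiDAR sweeps.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n, n_readings = 600, 45
X = rng.integers(0, 400, size=(n, n_readings))      # distances in cm (synthetic)
y = rng.integers(0, 3, size=n)                      # e.g., wall/step/clear (assumed)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))  # ~chance on random data
```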

13 pages, 536 KiB  
Article
A Partial-Reconfiguration-Enabled HW/SW Co-Design Benchmark for LTE Applications
by Ali Hosseinghorban and Akash Kumar
Electronics 2022, 11(7), 978; https://doi.org/10.3390/electronics11070978 - 22 Mar 2022
Cited by 1 | Viewed by 2126
Abstract
Rapid and continuous evolution in telecommunication standards and applications has increased the demand for platforms with high parallelization capability, high flexibility, and low power consumption. FPGAs are known platforms that can provide all of these requirements. However, the evaluation of approaches, architectures, and scheduling policies in this area requires a suitable, open-source benchmark suite that runs on FPGAs. This paper harnesses high-level synthesis tools to implement high-performance, resource-efficient, and easy-to-maintain kernels for FPGAs. We provide various implementations of each kernel of PHY-Bench and WiBench, which are the most well-known benchmark suites for telecommunication applications, on FPGAs. We analyze the execution time and power consumption of the different kernels on ARM processors and the FPGA. We have made all sources and documentation public for the benefit of the research community. The code is flexible, and all kernels can easily be regenerated for different sizes. The results show that the FPGA can increase the speed by up to 19.4 times. Furthermore, we show that the power consumption of the FPGA can be reduced by up to 45% by partially reconfiguring a kernel that fits the size of the input data instead of using a large kernel that supports all inputs. We also show that partial reconfiguration can improve the execution time for processing a sub-frame in the uplink application by 33% compared to an FPGA-based approach without partial reconfiguration. Full article
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)

16 pages, 1214 KiB  
Article
Approximate Entropy of Spiking Series Reveals Different Dynamical States in Cortical Assemblies
by Leonardo Ermini, Paolo Massobrio and Luca Mesin
Electronics 2022, 11(6), 936; https://doi.org/10.3390/electronics11060936 - 17 Mar 2022
Cited by 1 | Viewed by 1534
Abstract
Self-organized criticality theory posits that the information transmission and computational performance of neural networks are optimal in the critical state. Using recordings of the spontaneous activity originated by dissociated neuronal assemblies coupled to Micro-Electrode Arrays (MEAs), we tested this hypothesis with Approximate Entropy (ApEn) as a measure of complexity and information transfer. We analysed 60 min of electrophysiological activity of three neuronal cultures exhibiting either sub-critical, critical or super-critical behaviour. The firing patterns on each electrode were studied in terms of the inter-spike interval (ISI), whose complexity was quantified using ApEn. We found that in the critical state the local complexity (measured in terms of ApEn) is larger than in sub- and super-critical conditions (mean ± std ApEn of about 0.93 ± 0.09, 0.66 ± 0.18 and 0.49 ± 0.27 for the cultures in the critical, sub-critical and super-critical states, respectively—differences statistically significant). Our estimations were stable when considering epochs as short as 5 min (pairwise cross-correlation of the spatial distribution of mean ApEn of 94 ± 5%). These preliminary results indicate that ApEn has the potential to be a reliable and stable index for monitoring local information transmission in a neuronal network during maturation. Thus, ApEn applied to ISI time series appears potentially useful for reflecting the overall complex behaviour of the neural network, even when monitoring a single location. Full article
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)
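
Approximate entropy has a standard definition that fits in a few lines; here is a plain NumPy version applied to a toy ISI series (the embedding dimension m and tolerance r follow common defaults, not necessarily the paper's settings):

```python
# Illustrative Approximate Entropy (ApEn) of an inter-spike-interval series.
import numpy as np

def apen(x, m=2, r_factor=0.2):
    """ApEn with tolerance r = r_factor * std(x)."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()

    def phi(m):
        n = len(x) - m + 1
        emb = np.array([x[i:i + m] for i in range(n)])  # embedded vectors
        # Chebyshev distance between all pairs of embedded vectors
        dist = np.max(np.abs(emb[:, None] - emb[None, :]), axis=2)
        c = (dist <= r).mean(axis=1)                    # match frequencies
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)

isi = np.abs(np.random.default_rng(3).normal(0.1, 0.05, size=400))
print(apen(isi))   # higher values = less regular spiking
```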

19 pages, 2304 KiB  
Article
SVM-Based Blood Exam Classification for Predicting Defining Factors in Metabolic Syndrome Diagnosis
by Dimitrios P. Panagoulias, Dionisios N. Sotiropoulos and George A. Tsihrintzis
Electronics 2022, 11(6), 857; https://doi.org/10.3390/electronics11060857 - 09 Mar 2022
Cited by 10 | Viewed by 2102
Abstract
Biomarkers have already been proposed as powerful classification features for use in the training of neural-network-based and other machine learning and artificial intelligence-based prognostic models in the scientific field of personalized nutrition. In this paper, we construct and study cascaded SVM-based classifiers for automated metabolic syndrome diagnosis. Specifically, using blood exams, we achieve an average accuracy of about 84% in correctly classifying body mass index. Similarly, the cascaded SVM-based classifiers achieve 74% accuracy in correctly classifying systolic blood pressure. Next, we propose and implement a system that achieves 84% accuracy in metabolic syndrome prediction. The proposed system relies not only on the prediction of body mass index but also on predictions, from blood exams, of total cholesterol, triglycerides and glucose. To make the paper self-contained, the key concepts of metabolic syndrome are summarized and a review of previous related work is included. Finally, conclusions are drawn and directions for related future research are outlined. Full article
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)
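
A cascade of SVMs can be sketched as one classifier's prediction feeding the next stage's feature set; everything below (features, stages, data) is a schematic stand-in for the paper's pipeline:

```python
# Illustrative two-stage SVM cascade: stage 1 predicts a BMI class from
# blood markers; stage 2 appends that prediction to predict the syndrome.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(5)
blood = rng.normal(size=(400, 6))       # e.g., cholesterol, triglycerides, glucose...
bmi_class = rng.integers(0, 3, 400)     # synthetic labels for the sketch
syndrome = rng.integers(0, 2, 400)

stage1 = SVC(kernel="rbf").fit(blood, bmi_class)
bmi_pred = stage1.predict(blood).reshape(-1, 1)

stage2_in = np.hstack([blood, bmi_pred])  # cascade: stage-1 output as a feature
stage2 = SVC(kernel="rbf").fit(stage2_in, syndrome)
print(stage2.predict(stage2_in[:5]))
```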

Review


17 pages, 571 KiB  
Review
Robust Optimization over Time Problems—Characterization and Literature Review
by Pavel Novoa-Hernández, Amilkar Puris and David A. Pelta
Electronics 2023, 12(22), 4609; https://doi.org/10.3390/electronics12224609 - 11 Nov 2023
Viewed by 729
Abstract
Robust optimization over time (ROOT) is a relatively recent topic in the field of evolutionary dynamic optimization (EDO). The goal of ROOT problems is to find solutions that remain optimal across several environments at the same time. Although significant contributions to ROOT have been published in the past, it is not clear to what extent progress has been made in terms of the types of problems addressed. In particular, we believe that there is confusion regarding what it actually means to solve a ROOT problem. To overcome these limitations, the objective of this paper is twofold: on the one hand, to provide a framework for characterizing ROOT problems in terms of their most relevant features, and on the other hand, to organize existing contributions according to it. As a result, from an initial set of 186 studies, the characterization framework was applied to 35 of them, allowing the identification of some important gaps and the proposal of new research opportunities. We have also experimentally addressed the effect of the available information on ROOT problems, concluding that it indeed has a significant impact on algorithm performance and that the proposed classification is appropriate for characterizing the complexity of ROOT problems. To help identify further research opportunities, we have implemented an interactive dashboard with the results of the review, which is available online. Full article
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)

20 pages, 1232 KiB  
Review
Requirements and Trade-Offs of Compression Techniques in Key–Value Stores: A Survey
by Charles Jaranilla and Jongmoo Choi
Electronics 2023, 12(20), 4280; https://doi.org/10.3390/electronics12204280 - 16 Oct 2023
Cited by 1 | Viewed by 1143
Abstract
The prevalence of big data has caused a notable surge in both the diversity and magnitude of data. Consequently, this has prompted the emergence and advancement of two distinct technologies: unstructured data management and data volume reduction. Key–value stores, such as Google’s LevelDB and Meta’s RocksDB, have emerged as a popular solution for managing unstructured data due to their ability to handle diverse data types with a simple key–value abstraction. Simultaneously, a multitude of data management tools have actively adopted compression techniques, such as Snappy and Zstd, to effectively reduce data volume. The objective of this study is to explore how these two technologies influence each other. For this purpose, we first examine a classification of compression techniques and discuss their strengths and weaknesses, especially those adopted by modern key–value stores. We also investigate internal structures and operations, such as batch writing and compaction, in order to grasp the characteristics of key–value stores. We then quantitatively evaluate the compression ratio and performance using RocksDB under diverse compression techniques, block sizes, value sizes, and workloads. Our evaluation shows that compression not only saves storage space but also decreases compaction overhead. It also reveals that compression techniques have inherent trade-offs: some provide a better compression ratio, while others yield better compression performance. Based on our evaluation, a number of potential avenues for further research have been identified, including the exploration of a compression-aware compaction mechanism, selective compression, and revisiting compression granularity. Full article
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)
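
The trade-off the survey measures (ratio versus speed) is easy to reproduce in miniature with Python's standard-library codecs; zlib and lzma stand in here for the Snappy and Zstd codecs discussed above:

```python
# Illustrative ratio-vs-speed comparison on a synthetic key-value block.
import time
import zlib
import lzma

# a "block" of repetitive key-value records, as a KV store might batch
block = b"".join(f"user{i:06d}={i * 7}\n".encode() for i in range(4096))

for name, codec in [("zlib", zlib), ("lzma", lzma)]:
    t0 = time.perf_counter()
    comp = codec.compress(block)
    dt = time.perf_counter() - t0
    ratio = len(block) / len(comp)
    print(f"{name}: ratio {ratio:.1f}x, compress time {dt * 1000:.2f} ms")
```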

17 pages, 1329 KiB  
Review
A Study of Optimization in Deep Neural Networks for Regression
by Chieh-Huang Chen, Jung-Pin Lai, Yu-Ming Chang, Chi-Ju Lai and Ping-Feng Pai
Electronics 2023, 12(14), 3071; https://doi.org/10.3390/electronics12143071 - 14 Jul 2023
Cited by 1 | Viewed by 1717
Abstract
Due to the rapid development of information technology in both hardware and software, deep neural networks for regression have become widely used in many fields. The optimization of deep neural networks for regression (DNNR), including the selection of data preprocessing, network architectures, optimizers, and hyperparameters, greatly influences the performance of regression tasks. Thus, this study aimed to collect and analyze the recent literature on DNNR from the aspect of optimization. In addition, the various platforms used for building DNNR models were investigated. This study makes a number of contributions. First, it organizes the optimization of DNNR models into sections. Then, the elements of the optimization in each section are listed and analyzed. Furthermore, this study delivers insights and critical issues related to DNNR optimization; optimizing the elements of the sections simultaneously, instead of individually or sequentially, could improve the performance of DNNR models. Finally, possible and potential directions for future study are provided. Full article
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)

43 pages, 2435 KiB  
Review
Advancements and Challenges in Machine Learning: A Comprehensive Review of Models, Libraries, Applications, and Algorithms
by Shahid Tufail, Hugo Riggs, Mohd Tariq and Arif I. Sarwat
Electronics 2023, 12(8), 1789; https://doi.org/10.3390/electronics12081789 - 10 Apr 2023
Cited by 19 | Viewed by 10269
Abstract
In the current world of the Internet of Things, cyberspace, mobile devices, businesses, social media platforms, healthcare systems, etc., a great deal of data is available online. Machine learning (ML) is what we need to understand in order to analyze these data intelligently and to build smart, automated applications on top of them. There are many different kinds of machine learning algorithms; the most well-known are supervised, unsupervised, semi-supervised, and reinforcement learning. This article goes over the different kinds of machine learning problems and the machine learning algorithms that are used to solve them. The main contribution of this study is a better understanding of the theory behind many machine learning methods and of how they can be applied in the real world, in fields such as energy, healthcare, finance, autonomous driving, e-commerce, and many more. This article is meant to be a go-to resource for academic researchers, data scientists, and machine learning engineers who are choosing among a wide range of data and methods, figuring out what kind of machine learning algorithm will work best for their problem, and estimating what results they can expect. Additionally, this article presents the major challenges in building machine learning models and explores the research gaps in this area. We also provide a brief overview of data protection laws and their provisions in different countries. Full article
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)

29 pages, 1160 KiB  
Review
The Influence of Emerging Technologies on Distance Education
by Magdalena Garlinska, Magdalena Osial, Klaudia Proniewska and Agnieszka Pregowska
Electronics 2023, 12(7), 1550; https://doi.org/10.3390/electronics12071550 - 25 Mar 2023
Cited by 12 | Viewed by 11151
Abstract
Recently, during the COVID-19 pandemic, distance education became mainstream. Many students were not prepared for this situation—they lacked equipment or were not even connected to the Internet. Schools and government institutions had to react quickly to allow students to learn remotely: they had to provide students with equipment (e.g., computers, tablets, and goggles) as well as with access to the Internet and other necessary tools. Teachers, on the other hand, were trying to adopt new technologies in the teaching process to enable more interactivity, mitigate feelings of isolation and disconnection, and enhance student engagement. New technologies, including Virtual Reality (VR), Augmented Reality (AR), Mixed Reality (MR), Extended Reality (XR, the so-called Metaverse), Big Data, Blockchain, and Free Space Optics (FSO), changed learning, teaching, and assessment. Although some of these tools were implemented quickly, with the COVID-19 pandemic as the trigger, most of these technologies will continue to be used, even in classroom teaching in both schools and universities. This paper presents a concise review of the emerging technologies applied in distance education, with the main emphasis placed on their influence on the efficiency of the learning process and their psychological impact on users. It turns out that both students and teachers were satisfied with remote learning, while in the case of younger children and high-school students, parents very often expressed their dissatisfaction. The limitation on the availability of remote learning is related to access to stable Internet and computer equipment, which turned out to be a rarity. In the current social context, the obtained results provide valuable insights into the factors affecting the acceptance of emerging technologies applied in distance education. Finally, this paper suggests research directions for the development of effective remote learning techniques. Full article
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)

21 pages, 528 KiB  
Review
A Survey of Recent Advances in Quantum Generative Adversarial Networks
by Tuan A. Ngo, Tuyen Nguyen and Truong Cong Thang
Electronics 2023, 12(4), 856; https://doi.org/10.3390/electronics12040856 - 08 Feb 2023
Cited by 6 | Viewed by 4391
Abstract
Quantum mechanics studies nature and its behavior at the scale of atoms and subatomic particles. By applying quantum mechanics, many problems can be solved more conveniently thanks to its special properties, such as superposition and entanglement. In the current noisy intermediate-scale quantum era, quantum mechanics finds its use in various fields of life. Following this trend, researchers seek to augment machine learning in a quantum way. The generative adversarial network (GAN), an important machine learning invention that excellently solves generative tasks, has also been extended with quantum versions. Since the first publication of a quantum GAN (QuGAN) in 2018, many QuGAN proposals have been suggested. A QuGAN may have a fully quantum or a hybrid quantum–classical architecture, which may need additional data processing in the quantum–classical interface. Similarly to classical GANs, QuGANs are trained using a loss function in the form of maximum likelihood, Wasserstein distance, or total variation. The gradients of the loss function can be calculated by applying the parameter-shift method or a linear combination of unitaries in order to update the parameters of the networks. In this paper, we review recent advances in quantum GANs. We discuss the structures, optimization, and network evaluation strategies of QuGANs, and different variants of quantum GANs are presented in detail. Full article
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)
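
The parameter-shift rule mentioned above can be illustrated without any quantum hardware: for a gate generated by a Pauli operator, the exact gradient of an expectation value f(θ) is (f(θ+π/2) − f(θ−π/2))/2. The toy expectation function below is a stand-in for a real circuit evaluation:

```python
# Illustrative parameter-shift gradient. For one RY(theta) rotation on |0>,
# the expectation <Z> is cos(theta); a real QuGAN would evaluate a circuit.
import numpy as np

def expectation(theta):
    return np.cos(theta)          # placeholder for a quantum-circuit run

def parameter_shift_grad(f, theta, shift=np.pi / 2):
    return (f(theta + shift) - f(theta - shift)) / 2.0

theta = 0.7
print(parameter_shift_grad(expectation, theta))  # -0.6442...
print(-np.sin(theta))                            # analytic check: identical
```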

15 pages, 2977 KiB  
Review
Use of Machine Learning in Air Pollution Research: A Bibliographic Perspective
by Shikha Jain, Navneet Kaur, Sahil Verma, Kavita, A. S. M. Sanwar Hosen and Satbir S Sehgal
Electronics 2022, 11(21), 3621; https://doi.org/10.3390/electronics11213621 - 06 Nov 2022
Cited by 4 | Viewed by 2747
Abstract
This research is an attempt to examine the recent status and development of scientific studies on the use of machine learning algorithms to model air pollution challenges. The study uses the Web of Science database as the primary search engine and covers over 900 highly peer-reviewed articles from the period 1990–2022. Papers published on these topics were evaluated using the VOSviewer and biblioshiny software to identify and visualize the significant authors, key trends, nations, research publications, and journals working on these issues. The findings show that research grew exponentially after 2012. Based on the survey, “particulate matter” is the most frequently occurring keyword, followed by “prediction”. Papers published by Chinese researchers have garnered the most citations (2421), followed by papers published in the United States of America (2256) and England (722). This study assists scholars, professionals, and global policymakers in understanding the current status of the research contributions on “air pollution and machine learning”, as well as in identifying relevant areas for future research. Full article
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)

23 pages, 2385 KiB  
Review
A Survey on Efficient Convolutional Neural Networks and Hardware Acceleration
by Deepak Ghimire, Dayoung Kil and Seong-heum Kim
Electronics 2022, 11(6), 945; https://doi.org/10.3390/electronics11060945 - 18 Mar 2022
Cited by 69 | Viewed by 10984
Abstract
Over the past decade, deep-learning-based representations have demonstrated remarkable performance in academia and industry. The learning capability of convolutional neural networks (CNNs) originates from a combination of various feature extraction layers that fully utilize large amounts of data. However, they often require substantial computation and memory resources while replacing traditional hand-engineered features in existing systems. In this review, to improve the efficiency of deep learning research, we focus on three aspects: quantized/binarized models, optimized architectures, and resource-constrained systems. Recent advances in lightweight deep learning models and neural architecture search (NAS) algorithms are reviewed, starting with simplified layers and efficient convolutions and moving on to new architectural design and optimization. In addition, several practical applications of efficient CNNs are investigated using various types of hardware architectures and platforms. Full article
(This article belongs to the Special Issue Feature Papers in Computer Science & Engineering)
