Computers, Volume 12, Issue 2 (February 2023) – 25 articles

Cover Story: Modern artificial-intelligence-based DDoS detection mechanisms have become increasingly popular and continue to show remarkable performance in detecting attacks, but because of their black-box nature they are not widely deployed in operational environments. The proposed research distinguishes between attack and benign traffic flows using an advanced anomaly detection method. It then selects the most influential features for each anomalous instance, with influence weights, using Explainable Artificial Intelligence (XAI). From these, the authors compile a list of the most informative DDoS attack detection features and customize the threshold for each feature value to explain detected anomalies.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF formats; the PDF is the official version. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
21 pages, 3963 KiB  
Article
Healthcare-Chain: Blockchain-Enabled Decentralized Trustworthy System in Healthcare Management Industry 4.0 with Cyber Safeguard
by Md. Shohidul Islam, Mohamed Ariff Bin Ameedeen, Md. Arafatur Rahman, Husnul Ajra and Zahian Binti Ismail
Computers 2023, 12(2), 46; https://doi.org/10.3390/computers12020046 - 20 Feb 2023
Cited by 7 | Viewed by 2302
Abstract
The pervasiveness of healthcare data, used to create better healthcare facilities and opportunities, is one of the most imperative parts of human life and offers radical advancements in healthcare services through the blockchain-based management, analysis, storage, and sharing of health-related big data. Researchers can address the challenges of developing a secure, scalable, and accessible dynamic healthcare infrastructure through the extensive data exchange required by the individual microservices of blockchain-based privacy-preserving health data management ledgers in Healthcare Industry 4.0. Securing privacy-preserving platforms with primitive cryptographic algorithms is risky and a serious concern, as the need to authenticate and store sensitive health data automatically is increasingly high. To achieve interoperability, security, efficiency, scalability, availability, and accountability among healthcare providers in heterogeneous networks, this paper proposes a blockchain-enabled, decentralized, trustworthy privacy-preserving platform for the healthcare industry. In the healthcare-chain system, blockchain provides a secure environment for the privacy-preserving health data management ledger through hash processing, which delivers high data security, storage immutability, and authentication functionality, with an integrated attribute signature for accessing prescribed health block data. The article describes a new secure data retention design and a prescribed evidence collection and evaluation mechanism with integrity, confidentiality, and availability to enforce data access control policies for healthcare microservice transactions. The scheme showed optimal performance in terms of mined health data size, average response time, transaction latency, and throughput for secured block transactions in blockchain networks.
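The storage-immutability property the abstract attributes to hash processing can be illustrated with a minimal hash-chained ledger. This is a stdlib sketch, not the paper's implementation; the record fields and helper names are invented for illustration:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Deterministic serialization so identical content always hashes identically.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain: list, record: dict) -> list:
    # Each block's hash covers its record plus the previous block's hash.
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"record": record, "prev_hash": prev_hash}
    block["hash"] = block_hash({"record": record, "prev_hash": prev_hash})
    chain.append(block)
    return chain

def verify_chain(chain: list) -> bool:
    # Any tampered record or broken link invalidates the chain from that point on.
    prev_hash = "0" * 64
    for block in chain:
        expected = block_hash({"record": block["record"], "prev_hash": prev_hash})
        if block["prev_hash"] != prev_hash or block["hash"] != expected:
            return False
        prev_hash = block["hash"]
    return True

ledger = []
append_block(ledger, {"patient": "P001", "rx": "atorvastatin 20 mg"})
append_block(ledger, {"patient": "P001", "rx": "metformin 500 mg"})
print(verify_chain(ledger))  # True
ledger[0]["record"]["rx"] = "tampered"
print(verify_chain(ledger))  # False
```

Changing any stored record breaks every subsequent hash link, which is the mechanism behind the immutability claim.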
(This article belongs to the Section Blockchain Infrastructures and Enabled Applications)

15 pages, 794 KiB  
Article
A Long Short-Term Memory Network Using Resting-State Electroencephalogram to Predict Outcomes Following Moderate Traumatic Brain Injury
by Nor Safira Elaina Mohd Noor, Haidi Ibrahim, Chi Qin Lai and Jafri Malin Abdullah
Computers 2023, 12(2), 45; https://doi.org/10.3390/computers12020045 - 20 Feb 2023
Cited by 2 | Viewed by 1599
Abstract
Although traumatic brain injury (TBI) is a global public health issue, not all injuries necessitate additional hospitalisation. Thinking, memory, attention, personality, and movement can all be negatively impacted by TBI, yet only a small proportion of nonsevere TBIs necessitate prolonged observation. Clinicians would benefit from an electroencephalography (EEG)-based computational intelligence model for outcome prediction: an evidence-based analysis would allow them to securely discharge patients who are at minimal risk of TBI-related mortality. Despite the increasing popularity of EEG-based deep learning research for predictive models with breakthrough performance, particularly in epilepsy prediction, its use in clinical decision making for the diagnosis and prognosis of TBI has not been as widely exploited. Therefore, using 60 s segments of unprocessed resting-state EEG data as input, we propose a long short-term memory (LSTM) network that can distinguish between improved and unimproved outcomes in moderate TBI patients. Complex feature extraction and selection are avoided in this architecture. The experimental results show that, with a classification accuracy of 87.50 ± 0.05%, the proposed prognostic model outperforms three related works. The results suggest that the proposed methodology is an efficient and reliable strategy to assist clinicians in creating an automated tool for predicting treatment outcomes from EEG signals.
(This article belongs to the Special Issue Human Understandable Artificial Intelligence)

15 pages, 2582 KiB  
Article
A Performance Study of CNN Architectures for the Autonomous Detection of COVID-19 Symptoms Using Cough and Breathing
by Meysam Effati and Goldie Nejat
Computers 2023, 12(2), 44; https://doi.org/10.3390/computers12020044 - 17 Feb 2023
Cited by 4 | Viewed by 1592
Abstract
Deep learning (DL) methods have the potential to be used for detecting COVID-19 symptoms. However, the rationale for which DL method to use and which symptoms to detect has not yet been explored. In this paper, we present the first performance study comparing various convolutional neural network (CNN) architectures for the autonomous preliminary detection of COVID-19 from cough and/or breathing symptoms. We compare and analyze residual networks (ResNets), Visual Geometry Group networks (VGGs), Alex neural networks (AlexNet), densely connected networks (DenseNet), squeeze neural networks (SqueezeNet), and the COVID-19 identification ResNet (CIdeR) to investigate their classification performance. We uniquely train and validate both unimodal and multimodal CNN architectures using the EPFL and Cambridge datasets. Performance comparison across all modes and datasets showed that VGG19 and DenseNet-201 achieved the highest unimodal and multimodal classification performance. VGG19 and DenseNet-201 had high F1 scores (0.94 and 0.92) for unimodal cough classification on the Cambridge dataset, compared to the next highest F1 score of 0.79 for ResNet, with comparable F1 scores to ResNet on the larger EPFL cough dataset. They also had consistently high accuracy, recall, and precision. For multimodal detection, VGG19 and DenseNet-201 had the highest F1 scores (0.91) compared to the other CNN structures (≤0.90), with VGG19 also having the highest accuracy and recall. Our investigation provides the foundation needed to select the appropriate deep CNN method for non-contact early COVID-19 detection.
(This article belongs to the Special Issue e-health Pervasive Wireless Applications and Services (e-HPWAS'22))

23 pages, 6706 KiB  
Article
Channel Intensity and Edge-Based Estimation of Heart Rate via Smartphone Recordings
by Anusha Krishnamoorthy, G. Muralidhar Bairy, Nandish Siddeshappa, Hilda Mayrose, Niranjana Sampathila and Krishnaraj Chadaga
Computers 2023, 12(2), 43; https://doi.org/10.3390/computers12020043 - 17 Feb 2023
Cited by 1 | Viewed by 1241
Abstract
Smartphones today come equipped with a wide variety of sensors and high-speed processors that can capture, process, store, and communicate different types of data. Coupled with their ubiquity, these devices show potential as practical and portable healthcare monitors that are both cost-effective and accessible. To this end, this study examines the feasibility of smartphones for estimating heart rate (HR) from video recordings of the users' fingerprints. The proposed methodology involves two-stage processing that combines channel-intensity-based approaches (Channel-Intensity mode/Counter method) with a novel technique that relies on the spatial and temporal position of the recorded fingerprint edges (Edge-Detection mode). The dataset included 32 fingerprint video recordings from 6 subjects, taken with the rear camera of 2 smartphone models. Each video clip was first validated to determine whether it was suitable for Channel-Intensity mode or Edge-Detection mode, followed by further processing and heart rate estimation in the selected mode. The relative accuracy was 93.04% for the Edge-Detection mode, with a standard error of estimates (SEE) of 6.55 and Pearson's correlation r > 0.91, while the Channel-Intensity mode showed a relative accuracy of 92.75%, with an SEE of 5.95 and Pearson's correlation r > 0.95. Further statistical analysis was carried out using Pearson's correlation test and the Bland–Altman method to verify the statistical significance of the results. The results show that the proposed smartphone-based methodology is a potential alternative to existing technologies for monitoring a person's heart rate.
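The core of any channel-intensity approach is counting pulse peaks in a per-frame intensity signal and converting the count to beats per minute. The following is a deliberately simplified sketch of that idea (naive local-maximum counting on a clean synthetic signal), not the paper's Counter method, which handles noisy real recordings:

```python
import math

def estimate_hr(intensity, fps):
    """Estimate heart rate (bpm) by counting local maxima in a per-frame
    channel-intensity signal, a simplified stand-in for a channel-intensity mode."""
    peaks = 0
    for i in range(1, len(intensity) - 1):
        if intensity[i] > intensity[i - 1] and intensity[i] >= intensity[i + 1]:
            peaks += 1
    duration_s = len(intensity) / fps
    return 60.0 * peaks / duration_s

# Synthetic 72 bpm pulse (1.2 beats/s) sampled at 30 frames per second for 10 s.
fps, seconds = 30, 10
signal = [math.sin(2 * math.pi * 1.2 * (i / fps)) for i in range(fps * seconds)]
print(round(estimate_hr(signal, fps)))  # 72
```

Real fingertip video would require band-pass filtering and peak-prominence checks before counting, which is where the validation stage described in the abstract comes in.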
(This article belongs to the Special Issue Machine and Deep Learning in the Health Domain)

16 pages, 2091 KiB  
Article
An ML-Powered Risk Assessment System for Predicting Prospective Mass Shooting
by Ahmed Abdelmoamen Ahmed and Nneoma Okoroafor
Computers 2023, 12(2), 42; https://doi.org/10.3390/computers12020042 - 17 Feb 2023
Viewed by 3673
Abstract
The United States has had more mass shooting incidents than any other country; more than 1800 incidents are reported to have occurred in the US during the past three years. Mass shooters often display warning signs before committing crimes, such as childhood traumas, domestic violence, firearms access, and aggressive social media posts. With the advancement of machine learning (ML), it is more possible than ever to predict mass shootings before they occur by studying the behavior of prospective mass shooters. This paper presents an ML-based system that uses unsupervised ML models to warn about a person's progressive tendency toward committing a mass shooting. The system uses two models, local outlier factor and K-means clustering, to learn both the psychological factors and the social media activities of previous shooters and to provide a probabilistic similarity between a new observation and an existing shooter. The developed system shows the similarity between a new record for a prospective shooter and one or more records from our dataset via a user-friendly GUI: users select social and criminal observations about the prospective shooter, and the webpage creates a new record, classifies it, and displays the similarity results. Furthermore, we developed a feed-in module that allows new observations to be added to the dataset and retrains the ML models. Finally, we evaluated the system using various performance metrics.
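The "similarity of a new observation to an existing record" step can be sketched with a simple Hamming similarity over binary warning-sign features. This is an illustrative stand-in, not the paper's LOF/K-means pipeline; the feature names come from the warning signs listed above, and the case IDs are invented:

```python
def similarity(candidate, record):
    """Fraction of matching binary features (Hamming similarity), a simplified
    stand-in for a probabilistic similarity score."""
    matches = sum(1 for a, b in zip(candidate, record) if a == b)
    return matches / len(candidate)

def most_similar(candidate, dataset):
    """Return the known record most similar to the new observation, with its score."""
    best = max(dataset, key=lambda rec: similarity(candidate, rec["features"]))
    return best["id"], similarity(candidate, best["features"])

# Features: [childhood_trauma, domestic_violence, firearm_access, aggressive_posts]
dataset = [
    {"id": "case-A", "features": [1, 0, 1, 1]},
    {"id": "case-B", "features": [0, 1, 1, 0]},
]
print(most_similar([1, 0, 1, 0], dataset))  # ('case-A', 0.75)
```

A clustering model would replace the per-record comparison with distance to learned cluster centroids, and an outlier detector would flag observations far from all of them.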

37 pages, 22759 KiB  
Article
Benefits of Using Network Modeling Platforms When Studying IP Networks and Traffic Characterization
by Ivan Nedyalkov
Computers 2023, 12(2), 41; https://doi.org/10.3390/computers12020041 - 16 Feb 2023
Viewed by 1164
Abstract
This article addresses the benefits of using IP network modeling platforms to study IP networks. For the purposes of this study, several models of IP networks were created, through which various hypotheses were examined, and different operational variants of the modeled networks were evaluated. The use of the GNS3 platform is proposed, together with several tools for monitoring the processes in IP networks. The application of IP network modeling platforms to the study of power electronic devices is also addressed. Such platforms greatly facilitate both the study of IP networks and the training of professionals to design, install, and maintain different types of IP networks. Thanks to the GNS3 platform, it was possible to implement models of IP networks with different functionalities and to answer the posed hypotheses/questions from the results obtained. The study confirmed that these platforms, and GNS3 in particular, are an excellent substitute for expensive network equipment: the IP network models created in the platform performed almost like networks built from real devices.

18 pages, 2231 KiB  
Article
Symbiotic Combination of a Bayesian Network and Fuzzy Logic to Quantify the QoS in a VANET: Application in Logistic 4.0
by Hafida Khalfaoui, Abdellah Azmani, Abderrazak Farchane and Said Safi
Computers 2023, 12(2), 40; https://doi.org/10.3390/computers12020040 - 14 Feb 2023
Cited by 2 | Viewed by 1519
Abstract
Intelligent transportation systems use new technologies to improve road safety. In these systems, vehicles are equipped with wireless communication systems called on-board units (OBUs) so that they can communicate with each other; this type of wireless network is referred to as a vehicular ad hoc network (VANET). The primary problem in a VANET is quality of service (QoS), because a small problem in the services can severely damage both human lives and the economy. From this perspective, this article contributes to a new conceptual project called the Smart Digital Logistic Services Provider (Smart DLSP), intended to give freight vehicles more intelligence in the service of logistics on a global scale. The article proposes a model that combines two approaches, a Bayesian network and fuzzy logic, to calculate the QoS in a VANET as a function of multiple criteria, and provides a database that helps determine the origin of the risk of degraded QoS in the network. The outcome of this approach was employed in an event tree analysis to assess the impact of the system's security mechanisms.
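The fuzzy-logic half of such a model maps raw network measurements onto membership degrees and aggregates them into a score. A minimal sketch follows; the membership breakpoints, criteria, and weights are assumptions for illustration, not values from the paper, and the Bayesian-network stage is reduced here to a fixed weighted aggregation:

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership: 0 at a, rising to 1 at b, back to 0 at c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def good_latency(ms):
    # Membership of "good latency": full at 0 ms, gone by 100 ms (assumed breakpoints).
    return triangular(ms, -1.0, 0.0, 100.0)

def good_delivery(ratio):
    # Ramp membership of "good delivery ratio": 0 below 0.5, full at 1.0.
    return max(0.0, min(1.0, (ratio - 0.5) / 0.5))

def qos_score(latency_ms, delivery_ratio, w_latency=0.5, w_delivery=0.5):
    # Weighted aggregation of memberships, standing in for the Bayesian/fuzzy fusion.
    return w_latency * good_latency(latency_ms) + w_delivery * good_delivery(delivery_ratio)

print(round(qos_score(20, 0.95), 3))  # 0.85
```

In the full model, the weights themselves would come from the Bayesian network's conditional probabilities rather than being fixed constants.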
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)

38 pages, 9944 KiB  
Article
Modeling Collaborative Behaviors in Energy Ecosystems
by Kankam O. Adu-Kankam and Luis M. Camarinha-Matos
Computers 2023, 12(2), 39; https://doi.org/10.3390/computers12020039 - 13 Feb 2023
Cited by 2 | Viewed by 1397
Abstract
The notions of a collaborative virtual power plant ecosystem (CVPP-E) and a cognitive household digital twin (CHDT) have been proposed as contributions to the efficient organization and management of households within renewable energy communities (RECs). CHDTs can be modeled as software agents that are designed to possess some cognitive capabilities, enabling them to make autonomous decisions on behalf of their human owners based on the value system of their physical twin. Due to their cognitive and decision-making capabilities, these agents can exhibit some behavioral attributes, such as engaging in diverse collaborative actions aimed at achieving some common goals. These behavioral attributes can be directed to the promotion of sustainable energy consumption in the ecosystem. Along this line, this work demonstrates various collaborative practices that include: (1) collaborative roles played by the CVPP manager such as (a) opportunity seeking and goal formulation, (b) goal proposition/invitation to form a coalition or virtual organization, and (c) formation and dissolution of coalitions; and (2) collaborative roles played by CHDTs which include (a) acceptance or decline of an invitation based on (i) delegation/non-delegation and (ii) value system compatibility/non-compatibility, and (b) the sharing of common resources. This study adopts a simulation technique that involves the integration of multiple simulation methods such as system dynamics, agent-based, and discrete event simulation techniques in a single simulation environment. The outcome of this study confirms the potential of adding cognitive capabilities to CHDTs and further shows that these agents could exhibit certain collaborative attributes, enabling them to become suitable as rational decision-making agents in households.
(This article belongs to the Special Issue Computing, Electrical and Industrial Systems 2022)

25 pages, 652 KiB  
Article
The Impact of COVID-19 on Purchase Behavior Changes in Smart Regions
by Mária Pomffyová and Lenka Veselovská
Computers 2023, 12(2), 38; https://doi.org/10.3390/computers12020038 - 11 Feb 2023
Cited by 1 | Viewed by 2270
Abstract
The COVID-19 pandemic changed consumer behavior through various restrictions and increased ICT use. By establishing and verifying hypotheses, we compare the intensities of the mutual correlations that indicate changes in consumer behavior depending on the degree and nature of changes in selected socio-demographic and socio-economic factors. The answers obtained in surveys of representative samples of 987 respondents from the Slovak Republic (carried out in 2021, on the dual quality of goods sold in the EU) and of 347 respondents (in 2022, on changes in Slovak consumer behavior) were evaluated with multivariate analyses using the SPSS program. The outputs indicated that during self-isolation periods Slovak consumers bought more or the same amount as before the pandemic; shopping habits were mainly changed by women and by groups with lower household income. Respondents preferred quality products and products posing the least risk to health. All consumers intend to continue shopping through e-commerce platforms, where they prefer a more personal experience (through social media or YouTube). Low-income people's budgets are threatened by cheap products and a poor distribution of spending, especially among young people. We recommend simplifying personalized, visualized sales and education content and e-methods of information sharing, in order to make them accessible to digitally disadvantaged groups (by income, age, education, etc.). The use of blockchains increases the transparency of production and sales value chains, reducing the occurrence of unfair practices and promoting participatory public dialogue.
(This article belongs to the Special Issue Computational Science and Its Applications 2022)

15 pages, 3255 KiB  
Review
Artificial Intelligence and Sentiment Analysis: A Review in Competitive Research
by Hamed Taherdoost and Mitra Madanchian
Computers 2023, 12(2), 37; https://doi.org/10.3390/computers12020037 - 07 Feb 2023
Cited by 24 | Viewed by 22064
Abstract
As part of a business strategy, effective competitive research helps businesses outperform their competitors and attract loyal consumers. To perform competitive research, sentiment analysis may be used to assess interest in certain themes, uncover market conditions, and study competitors. Artificial intelligence (AI) has improved the performance of multiple areas, particularly sentiment analysis. Using AI, sentiment analysis is the process of recognizing emotions expressed in text. AI comprehends the tone of a statement, as opposed to merely recognizing whether particular words within a group of text have a negative or positive connotation. This article reviews papers (2012–2022) that discuss how competitive market research identifies and compares major market measurements that help distinguish the services and goods of the competitors. AI-powered sentiment analysis can be used to learn what the competitors' customers think of them across all aspects of the businesses.
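The word-polarity baseline that the abstract contrasts with tone-aware AI can be sketched in a few lines. The toy lexicon and negation handling below are invented for illustration; production systems use trained models precisely because this surface heuristic misses tone:

```python
LEXICON = {"great": 1, "love": 1, "reliable": 1, "poor": -1, "slow": -1, "broken": -1}
NEGATORS = {"not", "never", "no"}

def polarity(text: str) -> int:
    """Sum word polarities, flipping sign after a simple negator,
    the kind of word-level heuristic that tone-aware models improve on."""
    score, flip = 0, 1
    for word in text.lower().split():
        word = word.strip(".,!?")
        if word in NEGATORS:
            flip = -1
            continue
        score += flip * LEXICON.get(word, 0)
        flip = 1
    return score

print(polarity("Their support is great, but the app is not reliable."))  # 0
```

The mixed review scores 0 because the heuristic cancels "great" against "not reliable"; a model that understands tone could still extract the distinct complaints, which is the gap the reviewed AI approaches address.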

21 pages, 6354 KiB  
Article
Monkeypox Outbreak Analysis: An Extensive Study Using Machine Learning Models and Time Series Analysis
by Ishaani Priyadarshini, Pinaki Mohanty, Raghvendra Kumar and David Taniar
Computers 2023, 12(2), 36; https://doi.org/10.3390/computers12020036 - 07 Feb 2023
Cited by 8 | Viewed by 2961
Abstract
The sudden, unexpected rise in monkeypox cases worldwide has become an increasing concern. The zoonotic disease, characterized by smallpox-like symptoms, has already spread to nearly twenty countries across several continents and is labeled a potential pandemic by experts. Monkeypox infections do not have specific treatments; however, since smallpox viruses are similar to monkeypox viruses, antiviral drugs and vaccines against smallpox could be used to prevent and treat monkeypox. Since the disease is becoming a global concern, it is necessary to analyze its impact on population health. Analyzing key outcomes, such as the number of people infected, deaths, medical visits, and hospitalizations, could play a significant role in preventing the spread. In this study, we analyze the spread of the monkeypox virus across different countries using machine learning techniques such as linear regression (LR), decision trees (DT), random forests (RF), elastic net regression (EN), artificial neural networks (ANN), and convolutional neural networks (CNN). Our study shows that CNNs perform the best; the performance of the models is evaluated using statistical parameters such as mean absolute error (MAE), mean squared error (MSE), mean absolute percentage error (MAPE), and R-squared error (R2). The study also presents a time-series analysis using autoregressive integrated moving average (ARIMA) and seasonal autoregressive integrated moving average (SARIMA) models for measuring the events over time. Comprehending the spread can lead to understanding the risk, which may be used to prevent further spread and may enable timely and effective treatment.
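The ARIMA idea (difference a non-stationary series, then fit an autoregressive model to the differences) can be shown in miniature. This is a hand-rolled ARIMA(1,1,0) one-step forecast with least-squares estimation of the single AR coefficient; the case counts are hypothetical, and real analyses would use a fitted library model with order selection and seasonal terms:

```python
def arima_110_forecast(series):
    """One-step forecast from a minimal ARIMA(1,1,0): difference the series,
    fit an AR(1) coefficient by least squares, then undifference the prediction."""
    diffs = [b - a for a, b in zip(series, series[1:])]
    # Least-squares AR(1) on the differenced series: phi = sum(d_t * d_{t+1}) / sum(d_t^2).
    num = sum(x * y for x, y in zip(diffs, diffs[1:]))
    den = sum(x * x for x in diffs[:-1])
    phi = num / den if den else 0.0
    # Predicted next difference is phi * last difference; add it back to the last level.
    return series[-1] + phi * diffs[-1]

cases = [3, 7, 14, 26, 45, 71]  # hypothetical cumulative case counts
print(round(arima_110_forecast(cases), 1))  # 109.0
```

SARIMA extends the same recipe with an additional seasonal differencing step and seasonal AR/MA terms at the period of the data.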
(This article belongs to the Special Issue Computational Science and Its Applications 2022)

17 pages, 2684 KiB  
Article
Energy-Efficient Cluster Head Selection in Wireless Sensor Networks Using an Improved Grey Wolf Optimization Algorithm
by Mandli Rami Reddy, M. L. Ravi Chandra, P. Venkatramana and Ravilla Dilli
Computers 2023, 12(2), 35; https://doi.org/10.3390/computers12020035 - 06 Feb 2023
Cited by 21 | Viewed by 3209
Abstract
The internet of things (IoT) and industrial IoT (IIoT) play a major role in today's world of intelligent networks, and they essentially use a wireless sensor network (WSN) as a perception layer to collect the intended data. This data is processed into information and sent to cloud servers through a base station; the challenge is to consume the minimum energy for processing and communication. The dynamic formation of cluster heads and energy-aware clustering schemes help improve the lifetime of WSNs. In recent years, grey wolf optimization (GWO) has become a popular, robust swarm-intelligence metaheuristic for feature selection that gives competitive results with impressive characteristics. In spite of several studies in the literature to enhance the performance of the GWO algorithm, there is a need for further improvements in terms of feature selection, accuracy, and execution time. In this paper, we propose energy-efficient cluster head selection using an improved version of GWO (EECHIGWO) to alleviate the imbalance between exploitation and exploration, the lack of population diversity, and the premature convergence of the basic GWO algorithm. The primary goal is to enhance energy efficiency, average throughput, network stability, and network lifetime in WSNs through an optimal selection of cluster heads with the EECHIGWO algorithm. It considers sink distance, residual energy, a cluster head balancing factor, and average intra-cluster distance as the parameters for selecting the cluster head. The proposed EECHIGWO-based clustering protocol has been tested in terms of the number of dead nodes, energy consumption, number of operating rounds, and average throughput. The simulation results confirmed the optimal selection of cluster heads with minimum energy consumption, resolved premature convergence, and an enhanced network lifetime with minimum energy use in WSNs. Using the proposed algorithm, network stability improves by 169.29%, 19.03%, 253.73%, 307.89%, and 333.51% compared to the SSMOECHS, FGWSTERP, LEACH-PRO, HMGWO, and FIGWO protocols, respectively.
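Whatever optimizer searches the candidate space, cluster-head selection reduces to ranking nodes by a fitness built from the parameters named above: residual energy, sink distance, and average intra-cluster distance. The weights, scaling, and node values below are illustrative assumptions, not the EECHIGWO formulation:

```python
def ch_fitness(node, weights=(0.4, 0.3, 0.3)):
    """Weighted fitness for ranking cluster-head candidates: reward high
    residual energy, penalize distance to the sink and average intra-cluster
    distance. Weights and the /100 scaling are illustrative assumptions."""
    w_energy, w_sink, w_intra = weights
    return (w_energy * node["residual_energy"]
            - w_sink * node["sink_distance"] / 100.0
            - w_intra * node["avg_intra_distance"] / 100.0)

nodes = [
    {"id": 1, "residual_energy": 0.9, "sink_distance": 80, "avg_intra_distance": 20},
    {"id": 2, "residual_energy": 0.5, "sink_distance": 60, "avg_intra_distance": 15},
]
best = max(nodes, key=ch_fitness)
print(best["id"])  # 1
```

A GWO-style optimizer would evaluate this fitness for whole candidate assignments and iteratively move its wolf population toward the best ones, rather than taking a single greedy maximum as shown here.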

17 pages, 498 KiB  
Article
A Novel Deep Learning-Based Intrusion Detection System for IoT Networks
by Albara Awajan
Computers 2023, 12(2), 34; https://doi.org/10.3390/computers12020034 - 05 Feb 2023
Cited by 24 | Viewed by 7026
Abstract
The impressive growth rate of the Internet of Things (IoT) has drawn the attention of cybercriminals more than ever. The growing number of cyber-attacks on IoT devices and intermediate communication media backs this claim. Attacks on IoT, if they remain undetected for an extended period, cause severe service interruption resulting in financial loss, and they also pose a threat to identity protection. Detecting intrusions on IoT devices in real time is essential to make IoT-enabled services reliable, secure, and profitable. This paper presents a novel Deep Learning (DL)-based intrusion detection system for IoT devices. This intelligent system uses a four-layer deep Fully Connected (FC) network architecture to detect malicious traffic that may initiate attacks on connected IoT devices. The proposed system was developed as a communication-protocol-independent system to reduce deployment complexity. In the experimental performance analysis, it demonstrated reliable performance for both simulated and real intrusions, detecting Blackhole, Distributed Denial of Service, Opportunistic Service, Sinkhole, and Wormhole attacks with an average accuracy of 93.74%. The system's precision, recall, and F1-score are, on average, 93.71%, 93.82%, and 93.47%, respectively. This deep-learning-based IDS maintains a 93.21% average detection rate, which is satisfactory for improving the security of IoT networks.
(This article belongs to the Special Issue Big Data Analytic for Cyber Crime Investigation and Prevention 2023)

17 pages, 4151 KiB  
Article
Enhancing Carsharing Experiences for Barcelona Citizens with Data Analytics and Intelligent Algorithms
by Erika M. Herrera, Laura Calvet, Elnaz Ghorbani, Javier Panadero and Angel A. Juan
Computers 2023, 12(2), 33; https://doi.org/10.3390/computers12020033 - 05 Feb 2023
Cited by 1 | Viewed by 1599
Abstract
Carsharing practices are spreading across many cities in the world. This paper analyzes real-life data obtained from a private carsharing company operating in the city of Barcelona, Spain. After describing the main trends in the data, machine learning and time-series analysis methods are [...] Read more.
Carsharing practices are spreading across many cities in the world. This paper analyzes real-life data obtained from a private carsharing company operating in the city of Barcelona, Spain. After describing the main trends in the data, machine learning and time-series analysis methods are employed to better understand citizens’ needs and behavior, as well as to make predictions about the evolution of their demand for this service. In addition, an original proposal is made regarding the location of the pick-up points. This proposal is based on a capacitated dispersion algorithm and aims at balancing two relevant factors: scattering of pick-up points (so that most users can benefit from the service) and efficiency (so that areas with higher demand are well covered). Our aim is to gain a deeper understanding of citizens’ needs and behavior in relation to carsharing services. The analysis includes three main components: descriptive, predictive, and prescriptive, resulting in customer segmentation and forecast of service demand, as well as original concepts for optimizing parking station location. Full article
(This article belongs to the Special Issue Sensors and Smart Cities 2023)
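The paper's algorithm is not reproduced here; as a minimal sketch of the capacitated dispersion idea the abstract describes, the greedy heuristic below keeps adding the candidate pick-up point farthest from those already selected until total station capacity covers demand (the candidate grid, demand, and capacity figures are hypothetical):

```python
import math

def capacitated_dispersion(candidates, demand, capacity_per_station):
    """Greedily select stations that maximise the minimum pairwise distance
    (dispersion) until total capacity covers the demand.
    candidates: list of (x, y) coordinates of candidate pick-up points."""
    # Seed with the two candidates farthest apart.
    a, b = max(((p, q) for p in candidates for q in candidates if p != q),
               key=lambda pq: math.dist(*pq))
    selected = [a, b]
    remaining = [p for p in candidates if p not in selected]
    while len(selected) * capacity_per_station < demand and remaining:
        # Add the candidate whose nearest selected station is farthest away.
        best = max(remaining,
                   key=lambda p: min(math.dist(p, s) for s in selected))
        selected.append(best)
        remaining.remove(best)
    return selected

# Hypothetical grid of candidate points and a demand of 50 trips per period.
points = [(x, y) for x in range(0, 10, 2) for y in range(0, 10, 2)]
stations = capacitated_dispersion(points, demand=50, capacity_per_station=12)
```

This greedy max–min scheme is one common way to trade off the two factors named in the abstract: the distance criterion enforces scattering, while the capacity constraint ties the number of stations to demand.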
16 pages, 637 KiB  
Article
Explainable AI-Based DDOS Attack Identification Method for IoT Networks
by Chathuranga Sampath Kalutharage, Xiaodong Liu, Christos Chrysoulas, Nikolaos Pitropakis and Pavlos Papadopoulos
Computers 2023, 12(2), 32; https://doi.org/10.3390/computers12020032 - 03 Feb 2023
Cited by 9 | Viewed by 3578
Abstract
The modern digitized world is mainly dependent on online services. The availability of online systems continues to be seriously challenged by distributed denial of service (DDoS) attacks. The challenge in mitigating attacks is not limited to identifying DDoS attacks when they happen, but [...] Read more.
The modern digitized world is mainly dependent on online services. The availability of online systems continues to be seriously challenged by distributed denial of service (DDoS) attacks. The challenge in mitigating attacks is not limited to identifying DDoS attacks when they happen, but also identifying the streams of attacks. However, existing attack detection methods cannot accurately and efficiently detect DDoS attacks. To this end, we propose a novel explainable artificial intelligence (XAI)-based method to identify DDoS attacks. This method detects abnormal behaviours of network traffic flows by analysing the traffic at the network layer. Moreover, it chooses the most influential features for each anomalous instance with influence weight and then sets a threshold value for each feature. Hence, this DDoS attack detection method defines security policies based on each feature threshold value for application-layer-based, volumetric-based, and transport control protocol (TCP) state-exhaustion-based features. Since the proposed method is based on layer three traffic, it can identify DDoS attacks on both Internet of Things (IoT) and traditional networks. Extensive experiments were performed on the University of Sannio, Benevento Intrusion Detection System (USB-IDS) dataset, which contains different types of DDoS attacks, to test the performance of the proposed solution. The results of the comparison show that the proposed method provides greater detection accuracy and attack certainty than the state-of-the-art methods. Full article
(This article belongs to the Special Issue Human Understandable Artificial Intelligence)
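The paper's anomaly detector and XAI machinery are not reproduced here; as a rough sketch of the per-feature thresholding idea the abstract describes, the snippet below derives a threshold for each traffic feature from benign flows and then ranks the features of an anomalous flow by how far they exceed their thresholds (feature names, values, and the mean-plus-k-sigma rule are hypothetical):

```python
import statistics

def feature_thresholds(benign_flows, k=3.0):
    """Per-feature threshold = mean + k * stdev over benign traffic."""
    return [statistics.mean(col) + k * statistics.stdev(col)
            for col in zip(*benign_flows)]

def explain(flow, thresholds, names):
    """Return the features of an anomalous flow that exceed their thresholds,
    ranked by how far they exceed them (a stand-in for influence weight)."""
    excess = [(name, value - t)
              for name, value, t in zip(names, flow, thresholds)
              if value > t]
    return sorted(excess, key=lambda e: e[1], reverse=True)

# Hypothetical features: packets/s, SYN ratio, mean payload size.
names = ["pkts_per_s", "syn_ratio", "payload_bytes"]
benign = [[100, 0.1, 500], [120, 0.12, 480], [90, 0.09, 520], [110, 0.11, 510]]
thresholds = feature_thresholds(benign)
anomaly = [5000, 0.95, 40]
top = explain(anomaly, thresholds, names)
```

A detector built this way yields an explanation ("which features, by how much") alongside each alert, which is the property that distinguishes the proposed method from black-box classifiers.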
16 pages, 1912 KiB  
Article
Applying Web Augmented Reality to Unexplosive Ordnance Risk Education
by Harith A. Hussein, Qabas A. Hameed, Reem D. Ismael, Mustafa Zuhaer Nayef Al-Dabagh and Moudher Khalid Abdalhammed
Computers 2023, 12(2), 31; https://doi.org/10.3390/computers12020031 - 01 Feb 2023
Viewed by 1463
Abstract
Unexploded Ordnances (UXOs) are considered a global concern and a persistent hazard due to their capability to endanger civilians and the place where they are located, and the probability of remaining active explosives even after decades of ending a conflict. Hence, risk education [...] Read more.
Unexploded Ordnances (UXOs) are considered a global concern and a persistent hazard due to their capability to endanger civilians and the areas where they are located, and the probability that the explosives remain active even decades after a conflict ends. Hence, risk education is crucial for providing individuals with life-saving knowledge on recognizing, avoiding, and reporting UXO threats. The main objective of this study is to develop a web augmented reality (WAR) application and investigate its effect on unexploded ordnance risk education. Firstly, UXO 3D models are edited and constructed using the Blender 3D computer graphics software. Secondly, the proposed web AR application is developed using the MindAR JavaScript-based library. Finally, the web application QR code and UXO Hiro codes are printed on infographics and brochures to be distributed to secondary school students aged 12 to 18 at six public and private schools in Tikrit City, Salah al-Din governorate, Iraq. Survey questions are validated, distributed, and collected from 137 respondents. The present study shows that the proposed web AR application increased respondents’ knowledge of UXO identification by 54.7%. Approximately 70% of respondents use the Internet for more than 3 h daily. Institutions should use new risk education methods in line with the tremendous technological growth and harness students’ knowledge and time in this field. Better risk education teaching methods can save lives. Full article
(This article belongs to the Special Issue Advances in Augmented and Mixed Reality to the Industry 4.0)
24 pages, 2113 KiB  
Article
Data Analysis Model for the Evaluation of the Factors That Influence the Teaching of University Students
by William Villegas-Ch., Aracely Mera-Navarrete and Joselin García-Ortiz
Computers 2023, 12(2), 30; https://doi.org/10.3390/computers12020030 - 31 Jan 2023
Cited by 1 | Viewed by 1757
Abstract
Currently, the effects of the pandemic caused by the Coronavirus disease discovered in 2019 are the subject of numerous studies by experts in labor, psychological issues, educational issues, etc. The universities, for their continuity, have implemented various technological tools for the development of [...] Read more.
Currently, the effects of the pandemic caused by the Coronavirus disease discovered in 2019 are the subject of numerous studies by experts in labor, psychological issues, educational issues, etc. Universities, to ensure continuity, have implemented various technological tools for the development of their activities, such as videoconference platforms, learning management systems, etc. This experience has led the educational sector to propose new educational models, such as hybrid education, that focus on the use of information technologies. To carry out its implementation, it is necessary to identify the adaptability of students to a technological environment and the factors that influence learning. To that end, this article proposes a data analysis framework that identifies the factors and variables of a hybrid teaching environment. The results obtained allow us to determine the level of influence of educational factors that affect learning by applying data analysis algorithms to profile students through a classification based on their characteristics and improve learning methodologies in these educational models. The updating of educational systems requires a flexible process that is aligned with the needs of the students. With this analysis framework, it is possible to create a student-centered educational environment and enable efficient change through granular analysis of the state of learning. Full article
21 pages, 1592 KiB  
Article
Cognitive Impairment and Dementia Data Model: Quality Evaluation and Improvements
by Dessislava Petrova-Antonova and Sophia Lazarova
Computers 2023, 12(2), 29; https://doi.org/10.3390/computers12020029 - 30 Jan 2023
Viewed by 1470
Abstract
Recently, datasets with various factors and indicators of cognitive diseases have been available for clinical research. Although the transformation of information to a particular data model is straightforward, many challenges arise if data from different repositories have to be integrated. Since each data [...] Read more.
Recently, datasets with various factors and indicators of cognitive diseases have been available for clinical research. Although the transformation of information to a particular data model is straightforward, many challenges arise if data from different repositories have to be integrated. Since each data source keeps entities with different names and relationships at different levels of granularity and format, the information can be partially lost or not properly presented. It is therefore important to have a common data model that provides a unified description of different factors and indicators related to cognitive diseases. Thus, in our previous work, we proposed a hierarchical cognitive impairment and dementia data model that keeps the semantics of the data in a human-readable format and accelerates the interoperability of clinical datasets. It defines data entities, their attributes and relationships related to diagnosis and treatment. This paper extends our previous work by evaluating and improving the data model by adapting the methodology proposed by D. Moody and G. Shanks. The completeness, simplicity, correctness and integrity of the data model are assessed and based on the results a new, improved version of the model is generated. The understandability of the improved model is evaluated using an online questionnaire. Simplicity and integrity are also considered as well as the factors that may influence the flexibility of the data model. Full article
(This article belongs to the Special Issue Computational Science and Its Applications 2022)
18 pages, 2864 KiB  
Article
Virtual Local Area Network Performance Improvement Using Ad Hoc Routing Protocols in a Wireless Network
by Shayma Wail Nourildean, Yousra Abd Mohammed and Hussein Ali Attallah
Computers 2023, 12(2), 28; https://doi.org/10.3390/computers12020028 - 28 Jan 2023
Cited by 3 | Viewed by 3082
Abstract
Wireless Communication has become one of the most popular types of communication networks because of the many services it provides; however, it has experienced several challenges in improving network performance. VLAN (Virtual Local Area Network) is a different approach which enables a network [...] Read more.
Wireless Communication has become one of the most popular types of communication networks because of the many services it provides; however, it has experienced several challenges in improving network performance. VLAN (Virtual Local Area Network) is a different approach which enables a network administrator to create a logical network from a physical network. By dividing a large network into smaller networks, VLAN technology improves network efficiency, management, and security. This study examines VLAN integration in wireless networks with mobile nodes. The network protection was improved by separating the connections and grouping them in a way that prevents any party from contacting unauthorized stations in another party using VLAN. VLAN demonstrated restricted access to private server data by managing traffic, improving security, and reducing levels of congestion. This paper investigates the virtual local area network in a wireless network with three ad hoc routing protocols in a number of different scenarios, using the Riverbed Modeler simulation software. It was found from the investigation process that adopting VLAN technology could reduce the network's delay and data traffic, but it also considerably lowers throughput, which is a major drawback of VLAN. Ad hoc routing algorithms, including the AODV (Ad Hoc On-Demand Distance Vector), DSR (Dynamic Source Routing), and OLSR (Optimized Link State Routing) routing protocols, were used to improve the delay and throughput of the network. Routing methods with VLAN were tested across the WLAN to obtain the best throughput gain performance. The findings also revealed that these ad hoc routing protocols improved Wireless Sensor Network performance, as an additional investigation into improving any network's delay and throughput. Full article
18 pages, 2076 KiB  
Article
Experiments and Evaluation of a Container Migration Data-Auditing System on Edge Computing Environment
by Toshihiro Uchibayashi, Bernady Apduhan, Takuo Suganuma and Masahiro Hiji
Computers 2023, 12(2), 27; https://doi.org/10.3390/computers12020027 - 27 Jan 2023
Cited by 1 | Viewed by 1425
Abstract
With the proliferation of IoT sensors and devices, storing collected data in the cloud has become common. A wide variety of data with different purposes and forms are not directly stored in the cloud but are sent to the cloud via edge servers. [...] Read more.
With the proliferation of IoT sensors and devices, storing collected data in the cloud has become common. A wide variety of data with different purposes and forms are not directly stored in the cloud but are sent to the cloud via edge servers. At the edge server, applications are running in containers and virtual machines to collect data. However, the current deployment and movement mechanisms for containers and virtual machines do not consider any conventions or regulations for the applications and the data they contain. Therefore, it is easy to deploy and migrate containers and virtual machines, but a deployment or migration may violate the licensing terms of the contained applications, the rules of the organization, or the laws and regulations of the country concerned. We have already proposed a data-audit control mechanism for the migration of virtual machines. The proposed mechanism successfully controls the unintentional and malicious migration of virtual machines. We expect similar problems with containers to occur as the number of edge servers increases. Therefore, we propose a policy-based data-audit control system for container migration. The proposed system was verified in the implemented edge computing environment, and the results showed that adding the proposed data-audit control mechanism had a minimal impact on migration time and that the system was practical enough. In the future, we intend to conduct verification not in a compact, short-range environment such as this one, but over an existing wide-area network. Full article
(This article belongs to the Special Issue Computational Science and Its Applications 2022)
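The paper's policy language and audit mechanism are not reproduced here; as a rough sketch of what policy-based migration control means, the snippet below checks a container's data class against region policies before allowing a migration (all policy names, fields, and regions are hypothetical):

```python
def may_migrate(container, destination, policies):
    """Return (allowed, reasons): deny if any policy attached to the
    container's data class forbids the destination region."""
    reasons = [f"policy '{p['name']}' forbids region {destination['region']}"
               for p in policies.get(container["data_class"], [])
               if destination["region"] not in p["allowed_regions"]]
    return (not reasons, reasons)

# Hypothetical policy: personal data must stay in EU regions.
policies = {"personal_data": [{"name": "GDPR-like",
                               "allowed_regions": {"eu-west", "eu-central"}}]}
container = {"id": "edge-cam-01", "data_class": "personal_data"}

ok, why = may_migrate(container, {"region": "us-east"}, policies)    # denied
ok2, _ = may_migrate(container, {"region": "eu-west"}, policies)     # allowed
```

The point of such a pre-migration check, as in the proposed system, is that the audit runs before the container moves, so a violating migration is blocked rather than detected after the fact.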
13 pages, 2493 KiB  
Article
Automatic Evaluation of Neural Network Training Results
by Roman Barinov, Vasiliy Gai, George Kuznetsov and Vladimir Golubenko
Computers 2023, 12(2), 26; https://doi.org/10.3390/computers12020026 - 20 Jan 2023
Cited by 1 | Viewed by 2164
Abstract
This article is dedicated to solving the problem of an insufficient degree of automation of artificial neural network training. Despite the availability of a large number of libraries for training neural networks, machine learning engineers often have to manually control the training process [...] Read more.
This article is dedicated to solving the problem of an insufficient degree of automation of artificial neural network training. Despite the availability of a large number of libraries for training neural networks, machine learning engineers often have to manually control the training process to detect overfitting or underfitting. This article considers the task of automatically estimating neural network training results through an analysis of learning curves. Such analysis allows one to determine one of three possible states of the training process: overfitting, underfitting, and optimal training. We propose several algorithms for extracting feature descriptions from learning curves using mathematical statistics. Further state classification is performed using classical machine learning models. The proposed automatic estimation model serves to improve the degree of automation of neural network training and interpretation of its results, while also taking a step toward constructing self-training models. In most cases when the training process of neural networks leads to overfitting, the developed model determines its onset ahead of the early stopping method by 3–5 epochs. Full article
(This article belongs to the Special Issue Multimodal Pattern Recognition of Social Signals in HCI)
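The paper's feature descriptions and classifiers are not reproduced here; as a rough sketch of how a training state can be read off a learning curve, the heuristic below uses two simple statistics (recent validation-loss slope and train/validation gap) to label a run (the tolerances and curves are hypothetical):

```python
def training_state(train_loss, val_loss, tail=5, gap_tol=0.15, slope_tol=-0.001):
    """Heuristic classification of a training run from its learning curves:
    look at the slope of the recent validation loss and the train/val gap."""
    recent = val_loss[-tail:]
    slope = (recent[-1] - recent[0]) / (len(recent) - 1)
    gap = val_loss[-1] - train_loss[-1]
    if slope > 0 and gap > gap_tol:
        return "overfitting"    # validation loss rising away from train loss
    if slope < slope_tol:
        return "underfitting"   # still improving: training stopped too early
    return "optimal"

# Hypothetical curves: train loss keeps falling, validation loss turns upward.
train = [1.0, 0.6, 0.4, 0.3, 0.22, 0.18, 0.15, 0.12]
val   = [1.1, 0.7, 0.5, 0.42, 0.40, 0.43, 0.48, 0.55]
state = training_state(train, val)
```

The paper feeds richer statistical feature descriptions into trained classifiers rather than fixed rules, but the input/output contract is the same: learning curves in, one of three states out.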
6 pages, 181 KiB  
Editorial
Acknowledgment to the Reviewers of Computers in 2022
by Computers Editorial Office
Computers 2023, 12(2), 25; https://doi.org/10.3390/computers12020025 - 19 Jan 2023
Viewed by 768
Abstract
High-quality academic publishing is built on rigorous peer review [...] Full article
18 pages, 963 KiB  
Article
English Language Learning via YouTube: An NLP-Based Analysis of Users’ Comments
by Husam M. Alawadh, Amerah Alabrah, Talha Meraj and Hafiz Tayyab Rauf
Computers 2023, 12(2), 24; https://doi.org/10.3390/computers12020024 - 19 Jan 2023
Cited by 2 | Viewed by 3616
Abstract
Online teaching and learning has been beneficial in facilitating learning of English as a foreign language (EFL). In online EFL learning, YouTube is one of the most utilized information and communication technology (ICT) tools because of its inherent features that make it a [...] Read more.
Online teaching and learning has been beneficial in facilitating learning of English as a foreign language (EFL). In online EFL learning, YouTube is one of the most utilized information and communication technology (ICT) tools because of its inherent features that make it a unique environment for learners and educators. Many interesting aspects of YouTube-based learning can be beneficial in supplementing conventional classroom methods, and, therefore, such aspects must be identified. Previous scholarly work aimed at improving YouTube learning environment was predominantly conducted manually by gathering learners’ impressions through interviews and questionnaires to analyze the differences between YouTube- and classroom-based EFL learning. However, such methods are tedious and time-consuming and can lead to results that are of less generalizable implications. User comments on YouTube channels are useful in identifying such aspects, as they present a wealth of information related to the quality of the content provided, challenges the targeted audience faces, and areas of potential improvement. Therefore, in our current study, YouTube API is used to collect the comments of three randomly selected and popular YouTube channels. Following a data cleaning process, people’s sentiments about EFL learning were first identified via a TextBlob method. Second, the automated latent semantic analysis (LSA) method of topic finding was used to collect global and open-ended topics of discussion on YouTube-based EFL learning. Users’ sentiments on the most popular topics of discussion are discussed in this paper. 
Further, based on the results, hypothetical findings on YouTube EFL learning are provided as recommendations for future content, including more variety in the content covered, the introduction of meanings and punctuation alongside words, the design of the course such that it addresses a multinational audience of any age, and targeted teaching of each variety of English, such as British and American. We also make suggestions for learners of English who wish to utilize online and offline learning: first finding a course of interest based on one’s needs (which can be discussed with a tutor or any English teacher) to optimize the learning experience; participating in fearless educator–learner interaction and engagement; and asking other EFL learners about their previous experiences with learning online in order to maximize the benefit. Full article
(This article belongs to the Special Issue Present and Future of E-Learning Technologies)
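The study's TextBlob and LSA pipelines are not reproduced here; as a rough sketch of the lexicon-based sentiment scoring that tools like TextBlob perform on comments, the snippet below averages word polarities with simple negation handling (the tiny lexicon and thresholds are hypothetical stand-ins):

```python
# Tiny hypothetical polarity lexicon standing in for TextBlob's.
LEXICON = {"great": 0.8, "helpful": 0.6, "clear": 0.4, "love": 0.7,
           "boring": -0.6, "confusing": -0.7, "bad": -0.7, "hard": -0.3}
NEGATIONS = {"not", "never", "no"}

def polarity(comment):
    """Average polarity of lexicon words, flipping sign after a negation."""
    words = comment.lower().replace(",", " ").replace(".", " ").split()
    scores = []
    for i, w in enumerate(words):
        if w in LEXICON:
            score = LEXICON[w]
            if i > 0 and words[i - 1] in NEGATIONS:
                score = -score
            scores.append(score)
    return sum(scores) / len(scores) if scores else 0.0

def label(comment, threshold=0.1):
    p = polarity(comment)
    return "positive" if p > threshold else "negative" if p < -threshold else "neutral"

pos = label("This lesson was great and very clear")
neg = label("The pacing is confusing and a bit boring")
```

Scoring each comment this way, then grouping the labelled comments by LSA-discovered topics, is the shape of the analysis the abstract describes.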
14 pages, 2018 KiB  
Article
SENSIPLUS-LM: A Low-Cost EIS-Enabled Microchip Enhanced with an Open-Source Tiny Machine Learning Toolchain
by Michele Vitelli, Gianni Cerro, Luca Gerevini, Gianfranco Miele, Andrea Ria and Mario Molinara
Computers 2023, 12(2), 23; https://doi.org/10.3390/computers12020023 - 19 Jan 2023
Viewed by 2031
Abstract
The technological step towards sensors’ miniaturization, low-cost platforms, and evolved communication paradigms is rapidly moving the monitoring and computation tasks to the edge, causing the joint use of the Internet of Things (IoT) and machine learning (ML) to be massively employed. Edge devices [...] Read more.
The technological step towards sensors’ miniaturization, low-cost platforms, and evolved communication paradigms is rapidly moving the monitoring and computation tasks to the edge, causing the joint use of the Internet of Things (IoT) and machine learning (ML) to be massively employed. Edge devices are often composed of sensors and actuators, and their behavior depends on the relatively rapid inference of specific conditions. Therefore, the computation and decision-making processes become obsolete and ineffective by communicating raw data and leaving them to a centralized system. This paper responds to this need by proposing an integrated architecture, able to host both the sensing part and the learning and classifying mechanisms, empowered by ML, directly on board and thus able to overcome some of the limitations presented by off-the-shelf solutions. The presented system is based on a proprietary platform named SENSIPLUS, a multi-sensor device especially devoted to performing electrical impedance spectroscopy (EIS) on a wide frequency interval. The measurement acquisition, data processing, and embedded classification techniques are supported by a system capable of generating and compiling code automatically, which uses a toolchain to run inference routines on the edge. As a case study, the capabilities of such a platform are exploited in this work for water quality assessment. The joint system, composed of the measurement platform and the developed toolchain, is named SENSIPLUS-LM, standing for SENSIPLUS learning machine. The introduction of the toolchain empowers the SENSIPLUS platform by moving the inference phase of the machine learning algorithm to the edge, thus limiting the need for external computing platforms. The software part, i.e., the developed toolchain, is available for free download from GitLab, as reported in this paper. Full article
(This article belongs to the Special Issue Sensors and Smart Cities 2023)
28 pages, 762 KiB  
Article
A Centralized Routing for Lifetime and Energy Optimization in WSNs Using Genetic Algorithm and Least-Square Policy Iteration
by Elvis Obi, Zoubir Mammeri and Okechukwu E. Ochia
Computers 2023, 12(2), 22; https://doi.org/10.3390/computers12020022 - 18 Jan 2023
Cited by 1 | Viewed by 1264
Abstract
Q-learning has been primarily used as one of the reinforcement learning (RL) techniques to find the optimal routing path in wireless sensor networks (WSNs). However, for the centralized RL-based routing protocols with a large state space and action space, the baseline Q-learning used [...] Read more.
Q-learning has been primarily used as one of the reinforcement learning (RL) techniques to find the optimal routing path in wireless sensor networks (WSNs). However, for the centralized RL-based routing protocols with a large state space and action space, the baseline Q-learning used to implement these protocols suffers from degradation in the convergence speed, network lifetime, and network energy consumption due to the large number of learning episodes required to learn the optimal routing path. To overcome these limitations, an efficient model-free RL-based technique called Least-Square Policy Iteration (LSPI) is proposed to optimize the network lifetime and energy consumption in WSNs. The resulting designed protocol is a Centralized Routing Protocol for Lifetime and Energy Optimization with a Genetic Algorithm (GA) and LSPI (CRPLEOGALSPI). Simulation results show that the CRPLEOGALSPI has improved performance in network lifetime and energy consumption compared to an existing Centralized Routing Protocol for Lifetime Optimization with GA and Q-learning (CRPLOGARL). This is because the CRPLEOGALSPI chooses a routing path in a given state considering all the possible routing paths, and it is not sensitive to the learning rate. Moreover, while the CRPLOGARL evaluates the optimal policy from the Q-values, the CRPLEOGALSPI updates the Q-values based on the most updated information regarding the network dynamics using weighted functions. Full article
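The full CRPLEOGALSPI protocol (GA-generated candidate routes, weighted basis functions over network dynamics) is not reproduced here; the sketch below shows only the LSTD-Q linear solve at the heart of one LSPI policy-evaluation step, on a hypothetical two-state decision problem with one-hot features, where the learned weights are exactly the Q-values:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for small systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def lstdq(samples, phi, policy, gamma=0.9):
    """One LSTD-Q evaluation step over observed (s, a, r, s') samples:
    solve A w = b with A = sum phi(s,a)(phi(s,a) - gamma*phi(s',policy(s')))^T
    and b = sum phi(s,a) * r."""
    k = len(phi(*samples[0][:2]))
    A = [[0.0] * k for _ in range(k)]
    b = [0.0] * k
    for s, a, r, s2 in samples:
        f, f2 = phi(s, a), phi(s2, policy(s2))
        for i in range(k):
            b[i] += f[i] * r
            for j in range(k):
                A[i][j] += f[i] * (f[j] - gamma * f2[j])
    return solve(A, b)

def phi(s, a):
    """One-hot feature vector over the 2 states x 2 actions."""
    f = [0.0] * 4
    f[2 * s + a] = 1.0
    return f

policy = lambda s: 0   # the fixed policy being evaluated
samples = [(0, 0, 1.0, 1), (0, 1, 0.0, 0), (1, 0, 0.0, 1), (1, 1, 2.0, 0)]
w = lstdq(samples, phi, policy)   # w[2s+a] approximates Q(s, a)
```

Unlike the per-step Q-learning updates in CRPLOGARL, this batch solve uses every sample at once and has no learning rate, which is the insensitivity the abstract highlights; full LSPI would alternate this evaluation step with greedy policy improvement until the policy stops changing.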