Computers, Volume 12, Issue 10 (October 2023) – 28 articles

Cover Story: Distributed Computing Continuum Systems (DCCSs) have unleashed a computing paradigm that unifies various computing resources, including cloud, fog/edge, IoT, and mobile devices, into an integrated continuum. First, we discuss the evolution of computing paradigms up to DCCSs, analyzing the general architectures and the benefits and limitations of each paradigm. We then discuss the various computing devices that can form part of DCCSs to achieve computational goals in current and futuristic applications. In addition, we delve into the key features and benefits of DCCSs from the perspective of current computing needs. Finally, we provide a comprehensive overview of emerging applications that need DCCS architecture and of open challenges for upcoming research.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF formats; the PDF is the official version. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
26 pages, 2681 KiB  
Review
Deepfake Attacks: Generation, Detection, Datasets, Challenges, and Research Directions
by Amal Naitali, Mohammed Ridouani, Fatima Salahdine and Naima Kaabouch
Computers 2023, 12(10), 216; https://doi.org/10.3390/computers12100216 - 23 Oct 2023
Cited by 1 | Viewed by 11967
Abstract
Recent years have seen a substantial increase in interest in deepfakes, a fast-developing field at the nexus of artificial intelligence and multimedia. These artificial media creations, made possible by deep learning algorithms, allow for the manipulation and creation of digital content that is extremely realistic and challenging to identify from authentic content. Deepfakes can be used for entertainment, education, and research; however, they pose a range of significant problems across various domains, such as misinformation, political manipulation, propaganda, reputational damage, and fraud. This survey paper provides a general understanding of deepfakes and their creation; it also presents an overview of state-of-the-art detection techniques, existing datasets curated for deepfake research, as well as associated challenges and future research trends. By synthesizing existing knowledge and research, this survey aims to facilitate further advancements in deepfake detection and mitigation strategies, ultimately fostering a safer and more trustworthy digital environment. Full article

23 pages, 545 KiB  
Systematic Review
Application of Augmented Reality Interventions for Children with Autism Spectrum Disorder (ASD): A Systematic Review
by A. B. M. S. U. Doulah, Mirza Rasheduzzaman, Faed Ahmed Arnob, Farhana Sarker, Nipa Roy, Md. Anwar Ullah and Khondaker A. Mamun
Computers 2023, 12(10), 215; https://doi.org/10.3390/computers12100215 - 23 Oct 2023
Viewed by 2919
Abstract
Over the past 10 years, the use of augmented reality (AR) applications to assist individuals with special needs such as intellectual disabilities, autism spectrum disorder (ASD), and physical disabilities has become more widespread. The beneficial features of AR for individuals with autism have driven a large amount of research into using this technology in assisting against autism-related impairments. This study aims to evaluate the effectiveness of AR in rehabilitating and training individuals with ASD through a systematic review using the PRISMA methodology. A comprehensive search of relevant databases was conducted, and 25 articles were selected for further investigation after being filtered based on inclusion criteria. The studies focused on areas such as social interaction, emotion recognition, cooperation, learning, cognitive skills, and living skills. The results showed that AR intervention was most effective in improving individuals’ social skills, followed by learning, behavioral, and living skills. This systematic review provides guidance for future research by highlighting the limitations in current research designs, control groups, sample sizes, and assessment and feedback methods. The findings indicate that augmented reality could be a useful and practical tool for supporting individuals with ASD in daily life activities and promoting their social interactions. Full article
(This article belongs to the Special Issue Extended or Mixed Reality (AR + VR): Technology and Applications)

29 pages, 1099 KiB  
Article
Dependability Patterns: A Survey
by Ingrid A. Buckley and Eduardo B. Fernandez
Computers 2023, 12(10), 214; https://doi.org/10.3390/computers12100214 - 21 Oct 2023
Viewed by 1810
Abstract
Patterns embody the experience and knowledge of designers and are effective ways to improve nonfunctional aspects of software systems. Although there are several catalogs and surveys of security patterns, there is no catalog or general survey about dependability patterns. Our survey presented an enumeration of dependability patterns, which include fault tolerance, reliability, safety, and availability patterns. After defining classification groups and showing basic pattern relationships, we showed the references to the publications where these patterns were introduced and enumerated their intents. Another objective was evaluating these patterns to see if their descriptions are appropriate for a possible catalog, which would make them useful to developers and researchers. We found that most of them need remodeling because they use ad hoc templates or no templates. We considered some models from which we can derive patterns and methodologies that incorporate the use of patterns to build dependable software systems. We also provided directions for research. Full article
(This article belongs to the Special Issue Best Practices, Challenges and Opportunities in Software Engineering)

21 pages, 8749 KiB  
Article
The SARS-CoV-2 Virus Detection with the Help of Artificial Intelligence (AI) and Monitoring the Disease Using Fractal Analysis
by Mihai-Virgil Nichita, Maria-Alexandra Paun, Vladimir-Alexandru Paun and Viorel-Puiu Paun
Computers 2023, 12(10), 213; https://doi.org/10.3390/computers12100213 - 21 Oct 2023
Viewed by 1274
Abstract
This paper introduces an AI model designed for the diagnosis and monitoring of the SARS-CoV-2 virus. The artificial intelligence (AI) model, founded on machine learning, was created to identify, monitor, and predict the clinical evolution of patients infected with the CoV-2 virus. The deep learning (DL) process (an AI subset) is specifically prepared to identify patterns and provide automated information to healthcare professionals. The AI algorithm is based on the fractal analysis of CT chest images, which provides a practical guide to detecting the virus and establishing the degree of lung infection. CT pulmonary images, delivered by a free public source, were used to develop the AI algorithms for COVID-19 observation/recognition, with or without access to coherent medical data. The box-counting procedure was used to determine the fractal parameters: the value of the fractal dimension and the value of lacunarity. When infection is confirmed, the analysed image is used as input to a program that measures the degree of health impairment/damage using fractal analysis. Computer tomography image scans are only the starting point of a correctly established diagnosis, and a dedicated software framework was used to process all the collected details. With the trained AI model, a maximum accuracy of 98.1% was obtained. This procedure presents important potential in the development of an intricate medical solution for pulmonary disease evaluation. Full article
(This article belongs to the Special Issue Artificial Intelligence in Control)
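
The box-counting step described in this abstract can be illustrated with a short sketch. The following Python fragment is not the authors' implementation; it assumes a pre-thresholded binary mask as a NumPy array, and the box sizes and synthetic demo mask are chosen purely for illustration.

```python
import numpy as np

def box_counting_dimension(mask: np.ndarray, sizes=(2, 4, 8, 16, 32, 64)):
    """Estimate the fractal (box-counting) dimension of a 2D binary mask.

    For each box size s, count the s x s boxes containing at least one
    foreground pixel, then fit log(count) against log(1/s); the slope is
    the box-counting dimension.
    """
    counts = []
    for s in sizes:
        # Trim the mask so it tiles evenly into s x s boxes.
        h, w = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s
        blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Demo on a synthetic mask (a filled disc has a dimension close to 2).
yy, xx = np.mgrid[:256, :256]
disc = (xx - 128) ** 2 + (yy - 128) ** 2 < 80 ** 2
print(box_counting_dimension(disc))
```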

14 pages, 1708 KiB  
Article
L-PRNU: Low-Complexity Privacy-Preserving PRNU-Based Camera Attribution Scheme
by Alan Huang and Justie Su-Tzu Juan
Computers 2023, 12(10), 212; https://doi.org/10.3390/computers12100212 - 20 Oct 2023
Viewed by 1328
Abstract
A personal camera fingerprint can be created from images shared on social media by using Photo Response Non-Uniformity (PRNU) noise; this fingerprint can then be used to determine whether an unknown picture was taken with the same camera. Social media has become ubiquitous in recent years, and many of us regularly share photos of our daily lives online. Because a PRNU-based camera fingerprint is easy to create, the risk of privacy leakage must be taken seriously. To address this issue, a security scheme based on Boneh–Goh–Nissim (BGN) encryption was proposed in 2021. While effective, BGN encryption incurs a high run-time computational overhead due to its power computations. We therefore devised a new scheme that employs polynomial encryption and pixel confusion, resulting in a computation time more than ten times faster than BGN encryption. It also eliminates the previous method's need to send only the critical pixels to a Third-Party Expert. Furthermore, our scheme does not require decryption, as polynomial encryption and pixel confusion do not alter the correlation value. Consequently, the presented scheme surpasses previous methods in both theoretical analysis and experimental performance, being faster and more capable. Full article
(This article belongs to the Topic Modeling and Practice for Trustworthy and Secure Systems)
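
For readers unfamiliar with PRNU attribution, the sketch below shows the conventional unencrypted pipeline the paper builds on (residual averaging plus normalized correlation), not the proposed polynomial-encryption scheme; the Gaussian denoiser, the synthetic images, and any decision threshold are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(img, sigma=1.0):
    """Residual = image minus a denoised version; approximates the PRNU noise."""
    img = np.asarray(img, dtype=np.float64)
    return img - gaussian_filter(img, sigma)

def camera_fingerprint(images):
    """Average the residuals of many images taken with the same camera."""
    return np.mean([noise_residual(im) for im in images], axis=0)

def correlation(fingerprint, query_image):
    """Normalized correlation between a fingerprint and a query image's residual."""
    f = fingerprint - fingerprint.mean()
    q = noise_residual(query_image)
    q = q - q.mean()
    return float((f * q).sum() / (np.linalg.norm(f) * np.linalg.norm(q) + 1e-12))

# Synthetic demo: a fixed per-pixel pattern stands in for the sensor's PRNU.
rng = np.random.default_rng(0)
prnu = rng.normal(0, 0.02, size=(64, 64))
shots = [rng.normal(0.5, 0.1, (64, 64)) * (1 + prnu) for _ in range(20)]
fp = camera_fingerprint(shots)

same_camera = rng.normal(0.5, 0.1, (64, 64)) * (1 + prnu)
other_camera = rng.normal(0.5, 0.1, (64, 64))
print(correlation(fp, same_camera), correlation(fp, other_camera))
# Attribution compares the correlation against a threshold tuned on held-out data.
```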

18 pages, 1840 KiB  
Article
Classifying the Main Technology Clusters and Assignees of Home Automation Networks Using Patent Classifications
by Konstantinos Charmanas, Konstantinos Georgiou, Nikolaos Mittas and Lefteris Angelis
Computers 2023, 12(10), 211; https://doi.org/10.3390/computers12100211 - 20 Oct 2023
Viewed by 1393
Abstract
Home automation technologies are a vital part of humanity, as they provide convenience in otherwise mundane and repetitive tasks. In recent years, given the development of the Internet of Things (IoT) and artificial intelligence (AI) sectors, these technologies have seen a tremendous rise, both in the methodologies utilized and in their industrial impact. Hence, many organizations and companies are securing commercial rights by patenting such technologies. In this study, we employ an analysis of 8482 home automation patents from the United States Patent and Trademark Office (USPTO) to extract thematic clusters and distinguish those that drive the market and those that have declined over the course of time. Moreover, we identify prevalent competitors per cluster and analyze the results under the spectrum of their market impact and objectives. The key findings indicate that home automation networks encompass a variety of technological areas and organizations with diverse interests. Full article
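
As a loose illustration of extracting thematic clusters from patent classifications, the sketch below vectorizes (hypothetical) classification codes and clusters them with k-means; it does not reproduce the paper's actual methodology, data, or number of clusters.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Each patent is represented by its classification codes joined into a string
# (hypothetical sample data; the study analyzed 8482 USPTO home-automation patents).
patents = [
    "H04L12/28 G05B15/02 H04L12/2803",
    "G06N20/00 H04L67/12 G05B13/02",
    "H04L12/2816 H04W4/70 G08C17/02",
]

vectorizer = TfidfVectorizer(token_pattern=r"\S+")  # keep whole codes as tokens
X = vectorizer.fit_transform(patents)

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
for patent, label in zip(patents, kmeans.labels_):
    print(label, patent)
```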

18 pages, 398 KiB  
Article
Using Machine Learning and Routing Protocols for Optimizing Distributed SPARQL Queries in Collaboration
by Benjamin Warnke, Stefan Fischer and Sven Groppe
Computers 2023, 12(10), 210; https://doi.org/10.3390/computers12100210 - 17 Oct 2023
Viewed by 1552
Abstract
Due to increasing digitization, the amount of data in the Internet of Things (IoT) is constantly increasing. In order to process queries efficiently, strategies must therefore be found to reduce the transmitted data as much as possible. SPARQL is particularly well suited to the IoT environment because it can handle various data structures. Due to this flexibility of data structures, however, more data have to be joined again during processing. A good join order is therefore crucial, as it significantly impacts the number of intermediate results. However, computing the best join order is an NP-hard problem because the total number of possible join orders increases exponentially with the number of inputs to be combined. In addition, there are different definitions of optimal join orders. Machine learning uses stochastic methods to achieve good results quickly, even for complex problems. Other DBMSs also consider reducing network traffic but neglect the network topology. Network topology is crucial in the IoT, as devices are not evenly distributed. Therefore, we present new techniques for collaboration between routing, the application, and machine learning. Our approach, which pushes the operators as close as possible to the data sources, reduces the produced network traffic by 10%. Additionally, the model can reduce the number of intermediate results by a factor of 100 in comparison to other state-of-the-art approaches. Full article
(This article belongs to the Special Issue Advances in Database Engineered Applications 2023)
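
A toy greedy join-order heuristic conveys the intuition behind minimizing intermediate results; it is not the authors' ML- and routing-based optimizer, and the triple patterns and cardinality estimates below are invented.

```python
def greedy_join_order(patterns):
    """patterns: list of (triple_pattern, variables, estimated_cardinality).

    Greedily builds an order that prefers small inputs sharing a variable with
    the partial join, which tends to keep intermediate results small.
    """
    remaining = sorted(patterns, key=lambda p: p[2])  # start from the smallest input
    order = [remaining.pop(0)]
    bound = set(order[0][1])
    while remaining:
        # Prefer connected patterns (sharing a bound variable), smallest first.
        connected = [p for p in remaining if bound & p[1]] or remaining
        nxt = min(connected, key=lambda p: p[2])
        remaining.remove(nxt)
        order.append(nxt)
        bound |= nxt[1]
    return [p[0] for p in order]

order = greedy_join_order([
    ("?s :type :Sensor",    {"?s"},          100),
    ("?s :reading ?v",      {"?s", "?v"},    5000),
    ("?s :locatedIn ?room", {"?s", "?room"}, 800),
])
print(order)
```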

25 pages, 433 KiB  
Review
On the Robustness of ML-Based Network Intrusion Detection Systems: An Adversarial and Distribution Shift Perspective
by Minxiao Wang, Ning Yang, Dulaj H. Gunasinghe and Ning Weng
Computers 2023, 12(10), 209; https://doi.org/10.3390/computers12100209 - 17 Oct 2023
Cited by 1 | Viewed by 2097
Abstract
Utilizing machine learning (ML)-based approaches for network intrusion detection systems (NIDSs) raises valid concerns due to the inherent susceptibility of current ML models to various threats. Of particular concern are two significant threats associated with ML: adversarial attacks and distribution shifts. Although there has been a growing emphasis on researching the robustness of ML, current studies primarily concentrate on addressing specific challenges individually. These studies tend to target a particular aspect of robustness and propose innovative techniques to enhance that specific aspect. However, as a capability to respond to unexpected situations, the robustness of ML should be comprehensively built and maintained in every stage. In this paper, we aim to link the varying efforts throughout the whole ML workflow to guide the design of ML-based NIDSs with systematic robustness. Toward this goal, we conduct a methodical evaluation of the progress made thus far in enhancing the robustness of the targeted NIDS application task. Specifically, we delve into the robustness aspects of ML-based NIDSs against adversarial attacks and distribution shift scenarios. For each perspective, we organize the literature in robustness-related challenges and technical solutions based on the ML workflow. For instance, we introduce some advanced potential solutions that can improve robustness, such as data augmentation, contrastive learning, and robustness certification. According to our survey, we identify and discuss the ML robustness research gaps and future direction in the field of NIDS. Finally, we highlight that building and patching robustness throughout the life cycle of an ML-based NIDS is critical. Full article
(This article belongs to the Special Issue Big Data Analytic for Cyber Crime Investigation and Prevention 2023)

19 pages, 3406 KiB  
Article
Constructing and Visualizing Uniform Tilings
by Nelson Max
Computers 2023, 12(10), 208; https://doi.org/10.3390/computers12100208 - 17 Oct 2023
Viewed by 1289
Abstract
This paper describes a system which takes user input of a pattern of regular polygons around one vertex and attempts to construct a uniform tiling with the same pattern at every vertex by adding one polygon at a time. The system constructs spherical, planar, or hyperbolic tilings when the sum of the interior angles of the user-specified regular polygons is respectively less than, equal to, or greater than 360. Other works have catalogued uniform tilings in tables and/or illustrations. In contrast, this system was developed as an interactive educational tool for people to learn about symmetry and tilings by trial and error through proposing potential vertex patterns and investigating whether they work. Users can watch the rest of the polygons being automatically added one by one with recursive backtracking. When a trial polygon addition is found to violate the conditions of a regular tiling, polygons are removed one by one until a configuration with another compatible choice is found, and that choice is tried next. Full article
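
The spherical/planar/hyperbolic decision mentioned in the abstract reduces to comparing the vertex angle sum with 360°; a minimal sketch using the standard interior-angle formula for regular polygons follows.

```python
def classify_vertex(polygon_sides):
    """Classify the geometry implied by regular polygons meeting at one vertex.

    The interior angle of a regular n-gon is (n - 2) * 180 / n degrees.
    Angle sum < 360 -> spherical, = 360 -> planar, > 360 -> hyperbolic.
    """
    angle_sum = sum((n - 2) * 180.0 / n for n in polygon_sides)
    if abs(angle_sum - 360.0) < 1e-9:
        return "planar"
    return "spherical" if angle_sum < 360.0 else "hyperbolic"

print(classify_vertex([4, 4, 4, 4]))  # planar: four squares
print(classify_vertex([5, 5, 5]))     # spherical: three pentagons (dodecahedron)
print(classify_vertex([7, 7, 7]))     # hyperbolic: three heptagons
```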

17 pages, 13529 KiB  
Article
Augmented Reality in Primary Education: An Active Learning Approach in Mathematics
by Christina Volioti, Christos Orovas, Theodosios Sapounidis, George Trachanas and Euclid Keramopoulos
Computers 2023, 12(10), 207; https://doi.org/10.3390/computers12100207 - 16 Oct 2023
Cited by 2 | Viewed by 1867
Abstract
Active learning, a student-centered approach, engages students in the learning process and requires them to solve problems using educational activities that enhance their learning outcomes. Augmented Reality (AR) has revolutionized the field of education by creating an intuitive environment where real and virtual objects interact, thereby facilitating the understanding of complex concepts. Consequently, this research proposes an application, called “Cooking Math”, that utilizes AR to promote active learning in sixth-grade elementary school mathematics. The application comprises various educational games, each presenting a real-life problem, particularly focused on cooking recipes. To evaluate the usability of the proposed AR application, a pilot study was conducted involving three groups: (a) 65 undergraduate philosophy and education students, (b) 74 undergraduate engineering students, and (c) 35 sixth-grade elementary school students. To achieve this, (a) the System Usability Scale (SUS) questionnaire was provided to all participants and (b) semi-structured interviews were organized to gather the participants’ perspectives. The SUS results were quite satisfactory. In addition, the interviews’ outcomes indicated that the elementary students displayed enthusiasm, the philosophy and education students emphasized the pedagogy value of such technology, while the engineering students suggested that further improvements were necessary to enhance the effectiveness of the learning experience. Full article
(This article belongs to the Special Issue Xtended or Mixed Reality (AR+VR) for Education 2024)
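
The SUS questionnaire used in this evaluation is scored with the standard formula (odd items contribute score − 1, even items contribute 5 − score, and the sum is scaled by 2.5); a minimal sketch with a made-up response set follows.

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 Likert responses.

    Odd-numbered items (positively worded) contribute (response - 1);
    even-numbered items (negatively worded) contribute (5 - response).
    The total is scaled by 2.5 to give a 0-100 score.
    """
    assert len(responses) == 10
    total = sum(
        (r - 1) if (i % 2 == 0) else (5 - r)  # i is 0-based, so even index = odd item
        for i, r in enumerate(responses)
    )
    return total * 2.5

# Hypothetical answers from one participant to the ten SUS items.
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # -> 85.0
```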

18 pages, 2324 KiB  
Article
The Potential of Machine Learning for Wind Speed and Direction Short-Term Forecasting: A Systematic Review
by Décio Alves, Fábio Mendonça, Sheikh Shanawaz Mostafa and Fernando Morgado-Dias
Computers 2023, 12(10), 206; https://doi.org/10.3390/computers12100206 - 13 Oct 2023
Viewed by 1668
Abstract
Wind forecasting, which is essential for numerous services and safety, has significantly improved in accuracy due to machine learning advancements. This study reviews 23 articles from 1983 to 2023 on machine learning for wind speed and direction nowcasting. The wind prediction ranged from 1 min to 1 week, with more articles at lower temporal resolutions. Most works employed neural networks, focusing recently on deep learning models. Among the reported performance metrics, the most prevalent were mean absolute error, mean squared error, and mean absolute percentage error. Considering these metrics, the mean performance of the examined works was 0.56 m/s, 1.10 m/s, and 6.72%, respectively. The results underscore the novel effectiveness of machine learning in predicting wind conditions using high-resolution time data and demonstrated that deep learning models surpassed traditional methods, improving the accuracy of wind speed and direction forecasts. Moreover, it was found that the inclusion of non-wind weather variables does not benefit the model’s overall performance. Further studies are recommended to predict both wind speed and direction using diverse spatial data points, and high-resolution data are recommended along with the usage of deep learning models. Full article
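
The three error metrics reported in the review (MAE, MSE, and MAPE) are straightforward to compute; the sketch below uses hypothetical wind-speed observations and forecasts.

```python
import numpy as np

def mae(y_true, y_pred):
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

def mse(y_true, y_pred):
    return float(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2))

def mape(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100.0)

# Hypothetical wind-speed observations and forecasts in m/s.
obs = [3.2, 4.1, 5.0, 6.3]
pred = [3.0, 4.5, 4.8, 6.0]
print(mae(obs, pred), mse(obs, pred), mape(obs, pred))
```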

18 pages, 3440 KiB  
Article
Novel Optimized Strategy Based on Multi-Next-Hops Election to Reduce Video Transmission Delay for GPSR Protocol over VANETs
by Imane Zaimi, Abdelali Boushaba, Mohammed Oumsis, Brahim Jabir, Moulay Hafid Aabidi and Adil EL Makrani
Computers 2023, 12(10), 205; https://doi.org/10.3390/computers12100205 - 12 Oct 2023
Viewed by 1259
Abstract
Reducing transmission traffic delay is one of the most important issues that need to be considered for routing protocols, especially in the case of multimedia applications over vehicular ad hoc networks (VANET). To this end, we propose an extension of the FzGR (fuzzy geographical routing protocol), named MNH-FGR (multi-next-hops fuzzy geographical routing protocol). MNH-FGR is a multipath protocol that gains great extensibility by employing different link metrics and weight functions. To schedule multimedia traffic among multiple heterogeneous links, MNH-FGR integrates the weighted round-robin (WRR) scheduling algorithm, where the link weights, needed for scheduling, are computed using the multi-constrained QoS metric provided by the FzGR. The main goal is to ensure the stability of the network and the continuity of data flow during transmission. Simulation experiments with NS-2 are presented in order to validate our proposal. Additionally, we present a neural network algorithm to analyze and optimize the performance of routing protocols. The results show that MNH-FGR could satisfy critical multimedia applications with high on-time constraints. Also, the DNN model used can provide insights about which features had an impact on protocol performance. Full article
(This article belongs to the Special Issue Edge and Fog Computing for Internet of Things Systems 2023)
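
The weighted round-robin component mentioned in the abstract can be sketched in a few lines; the hop names and weights below are illustrative, whereas the protocol itself derives the weights from the FzGR QoS metric.

```python
from itertools import cycle

def wrr_schedule(next_hops, num_packets):
    """next_hops: dict mapping next-hop id -> integer weight.

    Expands each hop into 'weight' slots and cycles through them, so a hop
    with twice the weight receives twice the packets over a full round.
    """
    slots = [hop for hop, w in next_hops.items() for _ in range(w)]
    dispatcher = cycle(slots)
    return [next(dispatcher) for _ in range(num_packets)]

# Hypothetical link weights derived from a QoS metric (higher = better link).
print(wrr_schedule({"hopA": 3, "hopB": 2, "hopC": 1}, 12))
```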

36 pages, 702 KiB  
Article
Determining Resampling Ratios Using BSMOTE and SVM-SMOTE for Identifying Rare Attacks in Imbalanced Cybersecurity Data
by Sikha S. Bagui, Dustin Mink, Subhash C. Bagui and Sakthivel Subramaniam
Computers 2023, 12(10), 204; https://doi.org/10.3390/computers12100204 - 11 Oct 2023
Cited by 2 | Viewed by 1371
Abstract
Machine Learning is widely used in cybersecurity for detecting network intrusions. Though network attacks are increasing steadily, the percentage of such attacks to actual network traffic is significantly less. And here lies the problem in training Machine Learning models to enable them to detect and classify malicious attacks from routine traffic. The ratio of benign data to actual attacks is significantly high and as such forms highly imbalanced datasets. In this work, we address this issue using data resampling techniques. Though there are several oversampling and undersampling techniques available, how these oversampling and undersampling techniques can be used most effectively is addressed in this paper. Two oversampling techniques, Borderline SMOTE and SVM-SMOTE, are used for oversampling minority data, and random undersampling is used for undersampling majority data. Both oversampling techniques use KNN after selecting a random minority sample point, hence the impact of varying KNN values on the performance of the oversampling techniques is also analyzed. Random Forest is used for classification of the rare attacks. This work is done on a widely used cybersecurity dataset, UNSW-NB15, and the results show that 10% oversampling gives better results for both BSMOTE and SVM-SMOTE. Full article
(This article belongs to the Special Issue Big Data Analytic for Cyber Crime Investigation and Prevention 2023)
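
A sketch of this resampling pipeline using the imbalanced-learn implementations of Borderline-SMOTE, SVM-SMOTE, and random undersampling follows; the synthetic data, the exact ratios, and the k value are placeholders rather than the paper's experimental settings.

```python
from imblearn.over_sampling import BorderlineSMOTE, SVMSMOTE
from imblearn.under_sampling import RandomUnderSampler
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Synthetic stand-in for an imbalanced intrusion dataset such as UNSW-NB15
# (about 1% "attack" samples); features are assumed to already be numeric.
X, y = make_classification(n_samples=20000, n_features=20,
                           weights=[0.99, 0.01], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# Oversample the rare class up to 10% of the majority class, then undersample
# the majority class; k_neighbors controls the KNN step inside SMOTE.
oversampler = BorderlineSMOTE(sampling_strategy=0.10, k_neighbors=5, random_state=0)
# oversampler = SVMSMOTE(sampling_strategy=0.10, k_neighbors=5, random_state=0)  # alternative
undersampler = RandomUnderSampler(sampling_strategy=0.5, random_state=0)

X_res, y_res = oversampler.fit_resample(X_train, y_train)
X_res, y_res = undersampler.fit_resample(X_res, y_res)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_res, y_res)
print(classification_report(y_test, clf.predict(X_test)))
```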

14 pages, 512 KiB  
Article
QoS-Aware and Energy Data Management in Industrial IoT
by Yarob Abdullah and Zeinab Movahedi
Computers 2023, 12(10), 203; https://doi.org/10.3390/computers12100203 - 10 Oct 2023
Viewed by 1085
Abstract
Two crucial challenges in Industry 4.0 involve maintaining critical latency requirements for data access and ensuring efficient power consumption by field devices. Traditional centralized industrial networks that provide only rudimentary data distribution capabilities may not be able to meet such stringent requirements, and these requirements can later fail to be met due to connection or node failures or extreme performance degradation. To address this problem, this paper focuses on resource-constrained networks of Internet of Things (IoT) systems, exploiting the presence of several more powerful nodes acting as distributed local data storage proxies for each IoT set. To increase the battery lifetime of the network, nodes that are not involved in data transmission or data storage are turned off. In this paper, we investigate the issue of maximizing network lifetime while considering the restrictions on data access latency. For this purpose, data are cached distributively in proxy nodes, leading to a reduction in energy consumption and ultimately maximizing network lifetime. To address this problem, we introduce an energy-aware data management method (EDMM) that, with the goal of extending network lifetime, designates selected IoT nodes to store data distributively. Our proposed approach (1) ensures that data access latency stays below a specified threshold and (2) performs well with respect to network lifetime compared to an offline centralized heuristic algorithm. Full article
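
As a rough illustration of the latency-constrained, energy-aware selection described above (not the EDMM algorithm itself), the sketch below ranks hypothetical proxy candidates that satisfy a latency threshold by their residual energy.

```python
def rank_proxy_candidates(nodes, latency_threshold):
    """nodes: list of dicts with 'id', 'residual_energy', and 'latency', where
    'latency' is the estimated access latency (ms) for the devices a node serves.

    Keep only candidates meeting the latency threshold and rank them by
    residual energy (highest first); nodes left out could be put to sleep.
    """
    eligible = [n for n in nodes if n["latency"] <= latency_threshold]
    return sorted(eligible, key=lambda n: n["residual_energy"], reverse=True)

# Hypothetical candidate proxy nodes.
candidates = [
    {"id": "p1", "residual_energy": 0.9, "latency": 12},
    {"id": "p2", "residual_energy": 0.4, "latency": 8},
    {"id": "p3", "residual_energy": 0.7, "latency": 30},
]
print([n["id"] for n in rank_proxy_candidates(candidates, latency_threshold=20)])
```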

20 pages, 2140 KiB  
Article
An Information Security Engineering Framework for Modeling Packet Filtering Firewall Using Neutrosophic Petri Nets
by Jamal Khudair Madhloom, Zainab Hammoodi Noori, Sif K. Ebis, Oday A. Hassen and Saad M. Darwish
Computers 2023, 12(10), 202; https://doi.org/10.3390/computers12100202 - 08 Oct 2023
Cited by 1 | Viewed by 2040
Abstract
Due to the Internet's explosive growth, network security is now a major concern; as a result, tracking network traffic is essential for a variety of uses, including improving system efficiency, fixing bugs in the network, and keeping sensitive data secure. Firewalls are a crucial component of enterprise-wide security architectures because they protect individual networks from intrusion. The efficiency of a firewall can be negatively impacted by issues with its design, configuration, monitoring, and administration. Recent firewall security methods do not have the rigor to manage the vagueness that comes with filtering packets from the exterior. Knowledge representation and reasoning are two areas where fuzzy Petri nets (FPNs) receive extensive usage as a modeling tool. Despite their widespread success, FPNs' limitations in the security engineering field stem from the fact that it is difficult for them to represent different kinds of uncertainty. This article details the construction of a novel packet-filtering firewall model that addresses the limitations of current FPN-based filtering methods. The primary contribution is to employ Simplified Neutrosophic Petri nets (SNPNs) as a tool for modeling discrete event systems in the area of firewall packet filtering that are characterized by imprecise knowledge. Because of SNPNs' symbolic ability, the packet filtration model can be quickly and easily established, examined, enhanced, and maintained. Based on the idea that the ambiguity of a packet's movement can be described by if–then fuzzy production rules realized by the truth-membership function, the indeterminacy-membership function, and the falsity-membership function, we adopt neutrosophic logic for modelling PN transition objects. In addition, we simulate the dynamic behavior of the tracking system in light of the ambiguity inherent in packet filtering by presenting a two-level filtering method to improve the ranking of the filtering rules list. Results from experiments on a local area network back up the efficacy of the proposed method and illustrate how it can increase the firewall's sensitivity to threats posed by network traffic. Full article
(This article belongs to the Special Issue Using New Technologies on Cyber Security Solutions)

20 pages, 1170 KiB  
Article
MalFe—Malware Feature Engineering Generation Platform
by Avinash Singh, Richard Adeyemi Ikuesan and Hein Venter
Computers 2023, 12(10), 201; https://doi.org/10.3390/computers12100201 - 08 Oct 2023
Cited by 1 | Viewed by 1238
Abstract
The growing sophistication of malware has resulted in diverse challenges, especially among security researchers who are expected to develop mechanisms to thwart these malicious attacks. While security researchers have turned to machine learning to combat this surge in malware attacks and enhance detection and prevention methods, they often encounter limitations when it comes to sourcing malware binaries. This limitation places the burden on malware researchers to create context-specific datasets and detection mechanisms, a time-consuming and intricate process that involves a series of experiments. The lack of accessible analysis reports and a centralized platform for sharing and verifying findings has resulted in many research outputs that can neither be replicated nor validated. To address this critical gap, a malware analysis data curation platform was developed. This platform offers malware researchers a highly customizable feature generation process drawing from analysis data reports, particularly those generated in sandbox-based environments such as Cuckoo Sandbox. To evaluate the effectiveness of the platform, a replication of existing studies was conducted in the form of case studies. These studies revealed that the developed platform offers an effective approach that can aid malware detection research. Moreover, a real-world scenario involving over 3000 ransomware and benign samples for ransomware detection based on PE entropy was explored. This yielded an impressive accuracy score of 98.8% and an AUC of 0.97 when employing the decision tree algorithm, with a low latency of 1.51 ms. These results emphasize the necessity of the proposed platform while demonstrating its capacity to construct a comprehensive detection mechanism. By fostering community-driven interactive databanks, this platform enables the creation of datasets as well as the sharing of reports, both of which can substantially reduce experimentation time and enhance research repeatability. Full article
(This article belongs to the Special Issue Using New Technologies on Cyber Security Solutions)
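
The ransomware case study is based on PE entropy features and a decision tree; the sketch below computes Shannon byte entropy and fits a decision tree on placeholder samples, whereas the platform itself derives much richer features from sandbox analysis reports.

```python
import math
from collections import Counter
from sklearn.tree import DecisionTreeClassifier

def byte_entropy(data: bytes) -> float:
    """Shannon entropy (bits per byte) of a byte string; 0.0 for empty input."""
    if not data:
        return 0.0
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def entropy_feature(path):
    with open(path, "rb") as f:
        return [byte_entropy(f.read())]

# Hypothetical sample paths and labels (1 = ransomware, 0 = benign).
paths = ["samples/ransom1.exe", "samples/benign1.exe"]
labels = [1, 0]

X = [entropy_feature(p) for p in paths]
clf = DecisionTreeClassifier(random_state=0).fit(X, labels)
```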

22 pages, 1156 KiB  
Article
Cervical Cancer Diagnosis Using Stacked Ensemble Model and Optimized Feature Selection: An Explainable Artificial Intelligence Approach
by Abdulaziz AlMohimeed, Hager Saleh, Sherif Mostafa, Redhwan M. A. Saad and Amira Samy Talaat
Computers 2023, 12(10), 200; https://doi.org/10.3390/computers12100200 - 07 Oct 2023
Viewed by 1624
Abstract
Cervical cancer affects more than half a million women worldwide each year and causes over 300,000 deaths. The main goals of this paper are to study the effect of applying feature selection methods with stacking models for the prediction of cervical cancer, to propose stacking ensemble learning that combines different models with meta-learners to predict cervical cancer, and to explore the black box of the stacking model with the best-optimized features using explainable artificial intelligence (XAI). A cervical cancer dataset from the machine learning repository (UCI) that is highly imbalanced and contains missing values is used. Therefore, SMOTE-Tomek was used to combine under-sampling and over-sampling to handle imbalanced data, and pre-processing steps were implemented to handle missing values. Bayesian optimization optimizes the models and selects the best model architecture. Chi-square scores, recursive feature elimination, and tree-based feature selection are the three feature selection techniques applied to the dataset. For determining the factors that are most crucial for predicting cervical cancer, the stacking model is extended to multiple levels: Level 1 (multiple base learners) and Level 2 (meta-learner). At Level 1, stacking (training and testing stacking) is employed for combining the output of multi-base models, while training stacking is used to train meta-learner models at Level 2. Testing stacking is used to evaluate meta-learner models. The results showed that, based on the features selected by recursive feature elimination (RFE), the stacking model achieved higher accuracy, precision, recall, F1-score, and AUC. Furthermore, to ensure the efficiency, efficacy, and reliability of the produced model, local and global explanations are provided. Full article
(This article belongs to the Special Issue Future Systems Based on Healthcare 5.0 for Pandemic Preparedness)
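
A scikit-learn sketch of RFE-selected features feeding a two-level stacking model follows; the base learners, meta-learner, feature count, and synthetic data are illustrative, and the study additionally applied SMOTE-Tomek and Bayesian optimization, which are omitted here.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

# Synthetic stand-in for the UCI cervical cancer data (illustrative only).
X, y = make_classification(n_samples=800, n_features=30, weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# Level 1: base learners; Level 2: logistic-regression meta-learner.
stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("svm", SVC(probability=True, random_state=0)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
)

# RFE keeps the most informative features before the stacked model sees them.
model = Pipeline([
    ("rfe", RFE(LogisticRegression(max_iter=1000), n_features_to_select=15)),
    ("stack", stack),
])
model.fit(X_train, y_train)
print(model.score(X_test, y_test))
```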

19 pages, 1108 KiB  
Article
Enhancing Learning Personalization in Educational Environments through Ontology-Based Knowledge Representation
by William Villegas-Ch and Joselin García-Ortiz
Computers 2023, 12(10), 199; https://doi.org/10.3390/computers12100199 - 04 Oct 2023
Cited by 2 | Viewed by 2222
Abstract
In the digital age, the personalization of learning has become a critical priority in education. This article delves into the cutting-edge of educational innovation by exploring the essential role of ontology-based knowledge representation in transforming the educational experience. This research stands out for its significant and distinctive contribution to improving the personalization of learning. For this, concrete examples of use cases are presented in various academic fields, from formal education to corporate training and online learning. It is identified how ontologies capture and organize knowledge semantically, allowing the intelligent adaptation of content, the inference of activity and resource recommendations, and the creation of highly personalized learning paths. In this context, the novelty lies in the innovative approach to designing educational ontologies, which exhaustively considers different use cases and academic scenarios. Additionally, we delve deeper into the design decisions that support the effectiveness and usefulness of these ontologies for effective learning personalization. Through practical examples, it is illustrated how the implementation of ontologies transforms education, offering richer educational experiences adapted to students’ individual needs. This research represents a valuable contribution to personalized education and knowledge management in contemporary educational environments. The novelty of this work lies in its ability to redefine and improve the personalization of learning in a constantly evolving digital world. Full article
(This article belongs to the Special Issue Present and Future of E-Learning Technologies)

29 pages, 2949 KiB  
Article
Exploring the Potential of Distributed Computing Continuum Systems
by Praveen Kumar Donta, Ilir Murturi, Victor Casamayor Pujol, Boris Sedlak and Schahram Dustdar
Computers 2023, 12(10), 198; https://doi.org/10.3390/computers12100198 - 02 Oct 2023
Cited by 2 | Viewed by 2633
Abstract
Computing paradigms have evolved significantly in recent decades, moving from large room-sized resources (processors and memory) to incredibly small computing nodes. Recently, the power of computing has attracted almost all current application fields. Currently, distributed computing continuum systems (DCCSs) are unleashing the era of a computing paradigm that unifies various computing resources, including cloud, fog/edge computing, the Internet of Things (IoT), and mobile devices into a seamless and integrated continuum. Its seamless infrastructure efficiently manages diverse processing loads and ensures a consistent user experience. Furthermore, it provides a holistic solution to meet modern computing needs. In this context, this paper presents a deeper understanding of DCCSs’ potential in today’s computing environment. First, we discuss the evolution of computing paradigms up to DCCS. The general architectures, components, and various computing devices are discussed, and the benefits and limitations of each computing paradigm are analyzed. After that, our discussion continues into various computing devices that constitute part of DCCS to achieve computational goals in current and futuristic applications. In addition, we delve into the key features and benefits of DCCS from the perspective of current computing needs. Furthermore, we provide a comprehensive overview of emerging applications (with a case study analysis) that desperately need DCCS architectures to perform their tasks. Finally, we describe the open challenges and possible developments that need to be made to DCCS to unleash its widespread potential for the majority of applications. Full article
(This article belongs to the Special Issue Artificial Intelligence in Industrial IoT Applications)

13 pages, 617 KiB  
Article
Comparison of Automated Machine Learning (AutoML) Tools for Epileptic Seizure Detection Using Electroencephalograms (EEG)
by Swetha Lenkala, Revathi Marry, Susmitha Reddy Gopovaram, Tahir Cetin Akinci and Oguzhan Topsakal
Computers 2023, 12(10), 197; https://doi.org/10.3390/computers12100197 - 29 Sep 2023
Cited by 4 | Viewed by 1709
Abstract
Epilepsy is a neurological disease characterized by recurrent seizures caused by abnormal electrical activity in the brain. One of the methods used to diagnose epilepsy is through electroencephalogram (EEG) analysis. EEG is a non-invasive medical test for quantifying electrical activity in the brain. Applying machine learning (ML) to EEG data for epilepsy diagnosis has the potential to be more accurate and efficient. However, expert knowledge is required to set up the ML model with correct hyperparameters. Automated machine learning (AutoML) tools aim to make ML more accessible to non-experts and automate many ML processes to create a high-performing ML model. This article explores the use of automated machine learning (AutoML) tools for diagnosing epilepsy using electroencephalogram (EEG) data. The study compares the performance of three different AutoML tools, AutoGluon, Auto-Sklearn, and Amazon Sagemaker, on three different datasets from the UC Irvine ML Repository, Bonn EEG time series dataset, and Zenodo. Performance measures used for evaluation include accuracy, F1 score, recall, and precision. The results show that all three AutoML tools were able to generate high-performing ML models for the diagnosis of epilepsy. The generated ML models perform better when the training dataset is larger in size. Amazon Sagemaker and Auto-Sklearn performed better with smaller datasets. This is the first study to compare several AutoML tools and shows that AutoML tools can be utilized to create well-performing solutions for the diagnosis of epilepsy via processing hard-to-analyze EEG timeseries data. Full article
(This article belongs to the Special Issue Artificial Intelligence in Control)
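
One of the compared tools, AutoGluon, can be run on tabular EEG features in a few lines; the CSV files and label column below are hypothetical placeholders, and Auto-Sklearn and Amazon SageMaker follow their own APIs.

```python
from autogluon.tabular import TabularDataset, TabularPredictor

# Hypothetical CSVs with one row per EEG segment: extracted features plus a
# binary "seizure" label (the study used UCI, Bonn, and Zenodo datasets).
train = TabularDataset("eeg_train.csv")
test = TabularDataset("eeg_test.csv")

predictor = TabularPredictor(label="seizure", eval_metric="f1").fit(
    train,
    time_limit=600,  # seconds allotted to the AutoML search
)
print(predictor.evaluate(test))
print(predictor.leaderboard(test))
```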

25 pages, 1719 KiB  
Article
An Improved Dandelion Optimizer Algorithm for Spam Detection: Next-Generation Email Filtering System
by Mohammad Tubishat, Feras Al-Obeidat, Ali Safaa Sadiq and Seyedali Mirjalili
Computers 2023, 12(10), 196; https://doi.org/10.3390/computers12100196 - 28 Sep 2023
Viewed by 1511
Abstract
Spam emails have become a pervasive issue in recent years, as internet users receive increasing amounts of unwanted or fake emails. To combat this issue, automatic spam detection methods have been proposed, which aim to classify emails into spam and non-spam categories. Machine learning techniques have been utilized for this task with considerable success. In this paper, we introduce a novel approach to spam email detection by presenting significant advancements to the Dandelion Optimizer (DO) algorithm. The DO is a relatively new nature-inspired optimization algorithm inspired by the flight of dandelion seeds. While the DO shows promise, it faces challenges, especially in high-dimensional problems such as feature selection for spam detection. Our primary contributions focus on enhancing the DO algorithm. Firstly, we introduce a new local search algorithm based on flipping (LSAF), designed to improve the DO's ability to find the best solutions. Secondly, we propose a reduction equation that streamlines the population size during algorithm execution, reducing computational complexity. To showcase the effectiveness of our modified DO algorithm, which we refer to as the Improved DO (IDO), we conduct a comprehensive evaluation using the Spambase dataset from the UCI repository. However, we emphasize that our primary objective is to advance the DO algorithm, with spam email detection serving as a case study application. Comparative analysis against several popular algorithms, including Particle Swarm Optimization (PSO), the Genetic Algorithm (GA), Generalized Normal Distribution Optimization (GNDO), the Chimp Optimization Algorithm (ChOA), the Grasshopper Optimization Algorithm (GOA), the Ant Lion Optimizer (ALO), and the Dragonfly Algorithm (DA), demonstrates the superior performance of our proposed IDO algorithm. It excels in accuracy, fitness, and the number of selected features, among other metrics. Our results clearly indicate that the IDO overcomes the local optima problem commonly associated with the standard DO algorithm, owing to the incorporation of LSAF and the reduction equation method. In summary, our paper underscores the significant advancement made in the form of the IDO algorithm, which represents a promising approach for solving high-dimensional optimization problems, with a keen focus on practical applications in real-world systems. While we employ spam email detection as a case study, our primary contribution lies in the improved DO algorithm, which is efficient, accurate, and outperforms several state-of-the-art algorithms in various metrics. This work opens avenues for enhancing optimization techniques and their applications in machine learning. Full article
(This article belongs to the Topic Modeling and Practice for Trustworthy and Secure Systems)
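
The LSAF component is a flip-based local search over binary feature masks; the following is a generic sketch of such a search (not the authors' exact operator), with a toy fitness function standing in for a classifier's accuracy.

```python
import random

def local_search_by_flipping(solution, fitness, max_passes=3):
    """Hill-climb on a binary feature mask by flipping one bit at a time.

    'solution' is a list of 0/1 flags (selected features); 'fitness' is a
    callable to maximize. Each pass tries every position in random order and
    keeps a flip whenever it improves fitness.
    """
    best = list(solution)
    best_fit = fitness(best)
    for _ in range(max_passes):
        improved = False
        for i in random.sample(range(len(best)), len(best)):
            candidate = list(best)
            candidate[i] ^= 1  # flip the i-th feature flag
            cand_fit = fitness(candidate)
            if cand_fit > best_fit:
                best, best_fit, improved = candidate, cand_fit, True
        if not improved:
            break
    return best, best_fit

# Toy fitness: reward matching a target mask (in practice, classifier accuracy
# minus a feature-count penalty would be used instead).
target = [1, 0, 1, 1, 0, 0, 1, 0]
fit = lambda s: -sum(a != b for a, b in zip(s, target))
print(local_search_by_flipping([0] * 8, fit))
```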

16 pages, 1791 KiB  
Article
Rapidrift: Elementary Techniques to Improve Machine Learning-Based Malware Detection
by Abishek Manikandaraja, Peter Aaby and Nikolaos Pitropakis
Computers 2023, 12(10), 195; https://doi.org/10.3390/computers12100195 - 28 Sep 2023
Viewed by 1403
Abstract
Artificial intelligence and machine learning have become a necessary part of modern living along with the increased adoption of new computational devices. Because machine learning and artificial intelligence can detect malware better than traditional signature detection, the development of new and novel malware aiming to bypass detection has created a challenge in which models may experience concept drift: as new malware samples appear, detection performance drops. Our work aims to discuss the performance degradation of machine learning-based malware detectors over time, also called concept drift. To achieve this goal, we develop a Python-based framework, namely Rapidrift, capable of analysing concept drift at a more granular level. We also created two new malware datasets, TRITIUM and INFRENO, from different sources and threat profiles to conduct a deeper analysis of the concept drift problem. To test the effectiveness of Rapidrift, various fundamental methods that could reduce the effects of concept drift were experimentally explored. Full article
(This article belongs to the Special Issue Software-Defined Internet of Everything)
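
Rapidrift itself is the authors' framework; the sketch below only illustrates the underlying measurement, namely training once and tracking how accuracy decays across later, drifted windows, using synthetic data in place of real malware features.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_window(n, shift):
    """Synthetic two-class samples whose class-1 distribution drifts by 'shift'."""
    X0 = rng.normal(0.0, 1.0, size=(n, 5))
    X1 = rng.normal(1.5 + shift, 1.0, size=(n, 5))
    X = np.vstack([X0, X1])
    y = np.array([0] * n + [1] * n)
    return X, y

# Train on the earliest window, then evaluate on progressively drifted ones.
X_train, y_train = make_window(500, shift=0.0)
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)

for month, shift in enumerate([0.0, 0.5, 1.0, 1.5]):
    X_t, y_t = make_window(500, shift=-shift)  # class 1 drifts toward class 0
    print(f"window {month}: accuracy = {accuracy_score(y_t, clf.predict(X_t)):.3f}")
```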

17 pages, 556 KiB  
Article
Predictive Modeling of Student Dropout in MOOCs and Self-Regulated Learning
by Georgios Psathas, Theano K. Chatzidaki and Stavros N. Demetriadis
Computers 2023, 12(10), 194; https://doi.org/10.3390/computers12100194 - 27 Sep 2023
Cited by 2 | Viewed by 1723
Abstract
The primary objective of this study is to examine the factors that contribute to the early prediction of Massive Open Online Courses (MOOCs) dropouts in order to identify and support at-risk students. We utilize MOOC data of specific duration, with a guided study pace. The dataset exhibits class imbalance, and we apply oversampling techniques to ensure data balancing and unbiased prediction. We examine the predictive performance of five classic classification machine learning (ML) algorithms under four different oversampling techniques and various evaluation metrics. Additionally, we explore the influence of self-reported self-regulated learning (SRL) data provided by students and various other prominent features of MOOCs as potential indicators of early stage dropout prediction. The research questions focus on (1) the performance of the classic classification ML models using various evaluation metrics before and after different methods of oversampling, (2) which self-reported data may constitute crucial predictors for dropout propensity, and (3) the effect of the SRL factor on the dropout prediction performance. The main conclusions are: (1) prominent predictors, including employment status, frequency of chat tool usage, prior subject-related experiences, gender, education, and willingness to participate, exhibit remarkable efficacy in achieving high to excellent recall performance, particularly when specific combinations of algorithms and oversampling methods are applied, (2) self-reported SRL factor, combined with easily provided/self-reported features, performed well as a predictor in terms of recall when LR and SVM algorithms were employed, (3) it is crucial to test diverse machine learning algorithms and oversampling methods in predictive modeling. Full article
(This article belongs to the Special Issue Recent Advances in Computer-Assisted Learning)

19 pages, 3321 KiB  
Article
Prospective ICT Teachers’ Perceptions on the Didactic Utility and Player Experience of a Serious Game for Safe Internet Use and Digital Intelligence Competencies
by Aikaterini Georgiadou and Stelios Xinogalos
Computers 2023, 12(10), 193; https://doi.org/10.3390/computers12100193 - 26 Sep 2023
Viewed by 1037
Abstract
Nowadays, young students spend a lot of time playing video games and browsing on the Internet. Internet use has become even more widespread among young students due to the COVID-19 pandemic lockdown, which moved several educational activities online. The Internet, and generally the digital world that we live in, offers many possibilities in our everyday lives, but it also entails dangers such as cyber threats and the unethical use of personal data. It is widely accepted that everyone, especially young students, should be educated on safe Internet use and should be supported in acquiring other Digital Intelligence (DI) competencies as well. Towards this goal, we present the design and evaluation of the game "Follow the Paws", which aims to educate primary school students on safe Internet use and support them in acquiring relevant DI competencies. The game was designed taking into account the relevant literature and was evaluated by 213 prospective Information and Communication Technology (ICT) teachers. The participants playtested the game and evaluated it through an online questionnaire based on validated instruments proposed in the literature. The participants evaluated the didactic utility of the game and the anticipated player experience positively, while highlighting several improvements to be taken into consideration in a future revision of the game. Based on the results, proposals for further research are presented, including the detection of DI competencies through the game and evaluation of its actual effectiveness in the classroom. Full article
(This article belongs to the Special Issue Game-Based Learning, Gamification in Education and Serious Games 2023)

16 pages, 269 KiB  
Article
Explain Trace: Misconceptions of Control-Flow Statements
by Oleg Sychev and Mikhail Denisov
Computers 2023, 12(10), 192; https://doi.org/10.3390/computers12100192 - 24 Sep 2023
Viewed by 1188
Abstract
Control-flow statements often cause misunderstandings among novice computer science students. To better address these problems, teachers need to know the misconceptions that are typical at this stage. In this paper, we present the results of studying students’ misconceptions about control-flow statements. We compiled 181 questions, each containing an algorithm written in pseudocode and the execution trace of that algorithm. Some of the traces were correct; others contained highlighted errors. The students were asked to explain in their own words why the selected line of the trace was correct or erroneous. We collected and processed 10,799 answers from 67 CS1 students. Among the 24 misconceptions we found, 6 coincided with misconceptions from other studies, and 7 were narrower cases of known misconceptions. We did not find previous research regarding 11 of the misconceptions we identified. Full article
(This article belongs to the Special Issue Future Trends in Computer Programming Education)
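As a hypothetical illustration of the question format the abstract describes (the actual question bank is not reproduced here), the sketch below prints a short pseudocode-style algorithm together with a step-by-step execution trace in which one step is deliberately wrong and flagged for the student to explain; the algorithm, the trace values, and the flagged error are all invented for this example.

```python
# Hypothetical illustration (not taken from the paper's question bank):
# a pseudocode-style algorithm, its execution trace, and one trace step
# flagged for the student to explain as correct or erroneous.

algorithm = """
total := 0
for i from 1 to 3:
    total := total + i
print total
"""

# Each trace step records the statement executed and the variable state afterwards.
trace = [
    (1, "total := 0",                {"total": 0}),
    (2, "i := 1 (loop test passes)", {"total": 0, "i": 1}),
    (3, "total := total + i",        {"total": 1, "i": 1}),
    (4, "i := 2 (loop test passes)", {"total": 1, "i": 2}),
    (5, "total := total + i",        {"total": 3, "i": 2}),
    (6, "loop ends",                 {"total": 3, "i": 2}),  # erroneous: the iteration for i = 3 is missing
]

flagged_step = 6  # the student must explain why this step is wrong

print(algorithm)
for step, statement, state in trace:
    marker = "<-- explain this step" if step == flagged_step else ""
    print(f"{step}: {statement:32s} {state} {marker}")
```

A student's free-text justification of the flagged step (for instance, claiming the loop should indeed stop once i equals 2) is the kind of answer from which misconceptions about control-flow statements can be extracted.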
27 pages, 3461 KiB  
Communication
Analyzing Public Reactions, Perceptions, and Attitudes during the MPox Outbreak: Findings from Topic Modeling of Tweets
by Nirmalya Thakur, Yuvraj Nihal Duggal and Zihui Liu
Computers 2023, 12(10), 191; https://doi.org/10.3390/computers12100191 - 23 Sep 2023
Cited by 3 | Viewed by 1513
Abstract
In the last decade and a half, the world has experienced outbreaks of a range of viruses such as COVID-19, H1N1, flu, Ebola, Zika virus, Middle East Respiratory Syndrome (MERS), measles, and West Nile virus, just to name a few. During these outbreaks, the usage and effectiveness of social media platforms increased significantly, as such platforms served as virtual communities, enabling their users to share and exchange information, news, perspectives, opinions, ideas, and comments related to the outbreaks. Analyzing this Big Data of outbreak-related conversations using Natural Language Processing concepts such as Topic Modeling has attracted the attention of researchers from different disciplines such as Healthcare, Epidemiology, Data Science, Medicine, and Computer Science. The recent outbreak of the Mpox virus resulted in a tremendous increase in the usage of Twitter. Prior works in this area have primarily focused on the sentiment analysis and content analysis of these Tweets, and the few works that have focused on topic modeling have multiple limitations. This paper aims to address this research gap and makes two scientific contributions to this field. First, it presents the results of performing Topic Modeling on 601,432 Tweets about the 2022 Mpox outbreak that were posted on Twitter between 7 May 2022 and 3 March 2023. The results indicate that the conversations on Twitter related to Mpox during this time range may be broadly categorized into four distinct themes: Views and Perspectives about Mpox, Updates on Cases and Investigations about Mpox, Mpox and the LGBTQIA+ Community, and Mpox and COVID-19. Second, the paper presents the findings from the analysis of these Tweets. The results show that the most popular theme on Twitter (in terms of the number of Tweets posted) during this time range was Views and Perspectives about Mpox, followed by Mpox and the LGBTQIA+ Community, Mpox and COVID-19, and Updates on Cases and Investigations about Mpox, respectively. Finally, a comparison with related studies in this area of research is also presented to highlight the novelty and significance of this research work. Full article
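The authors' actual data collection, preprocessing, and modeling pipeline is not detailed in the abstract. As a rough, minimal sketch of how tweets can be grouped into themes with Latent Dirichlet Allocation (one common topic-modeling approach) via scikit-learn, using a handful of invented example tweets and the four-theme count reported above as the number of topics:

```python
# Minimal, hypothetical topic-modeling sketch (not the authors' pipeline):
# group a few toy "tweets" into themes with LDA.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

tweets = [
    "new mpox cases reported, health officials investigating",
    "mpox vaccine rollout and case updates from the health department",
    "worried this feels like covid all over again",
    "comparing the mpox response to the covid pandemic",
    "support and accurate information for the lgbtq community on mpox",
    "stop stigmatizing the lgbtq community over mpox",
]

# Bag-of-words counts; a real pipeline would also strip URLs, mentions, emojis, etc.
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(tweets)

# Four topics, mirroring the number of themes reported in the abstract.
lda = LatentDirichletAllocation(n_components=4, random_state=0)
lda.fit(counts)

# Print the top terms per topic; a human then labels each topic as a theme.
terms = vectorizer.get_feature_names_out()
for topic_idx, weights in enumerate(lda.components_):
    top_terms = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"Topic {topic_idx}: {', '.join(top_terms)}")
```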
14 pages, 3883 KiB  
Article
Model and Fuzzy Controller Design Approaches for Stability of Modern Robot Manipulators
by Shabnom Mustary, Mohammod Abul Kashem, Mohammad Asaduzzaman Chowdhury and Jia Uddin
Computers 2023, 12(10), 190; https://doi.org/10.3390/computers12100190 - 23 Sep 2023
Cited by 1 | Viewed by 1249
Abstract
Robotics is a crucial technology of Industry 4.0 that offers a diverse array of applications in the industrial sector. However, the quality of a robot’s manipulator is contingent on its stability, which is a function of the manipulator’s parameters. In previous studies, stability has been evaluated on the basis of a small number of manipulator parameters; as a result, little is known about how combinations of manipulator parameters jointly determine stability. Through Lagrangian mechanics and the consideration of multiple parameters, a mathematical model of a modern manipulator is developed in this study. In this model, motor acceleration, moment of inertia, and deflection are considered in order to assess the stability of a six-degree-of-freedom ABB robot manipulator. A novel mathematical approach to stability is developed in which stability is correlated with motor acceleration, moment of inertia, and deflection. In addition, fuzzy logic inference principles are employed to determine the stability status. The numerical data for the different manipulator parameters are verified using mathematical approaches. The results indicate that stability increases as motor acceleration increases, while it decreases as moment of inertia and deflection increase. It is anticipated that the implementation of these findings will increase industrial output. Full article
(This article belongs to the Special Issue Artificial Intelligence in Control)
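The paper's membership functions and rule base are not given in the abstract, so the sketch below is only a hand-rolled, hypothetical Mamdani-style fuzzy inference step; the normalized input ranges, triangular membership functions, and rules are assumptions chosen to reflect the reported trend that stability rises with motor acceleration and falls with moment of inertia and deflection.

```python
# Hypothetical fuzzy-inference sketch (membership functions, ranges, and rules
# are illustrative assumptions, not the paper's actual design).

def tri(x, a, b, c):
    """Triangular membership function peaking at b over the interval [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def stability_level(acceleration, inertia, deflection):
    """Map normalized inputs in [0, 1] to a stability level in [0, 1]."""
    # Fuzzification: degree to which each input is "low" or "high".
    acc_high = tri(acceleration, 0.3, 1.0, 1.7)
    acc_low  = tri(acceleration, -0.7, 0.0, 0.7)
    inr_high = tri(inertia, 0.3, 1.0, 1.7)
    inr_low  = tri(inertia, -0.7, 0.0, 0.7)
    dfl_high = tri(deflection, 0.3, 1.0, 1.7)
    dfl_low  = tri(deflection, -0.7, 0.0, 0.7)

    # Rule base reflecting the reported trend.
    rules = [
        (min(acc_high, inr_low, dfl_low), 1.0),   # stable
        (min(acc_low, inr_high, dfl_high), 0.0),  # unstable
        (min(acc_high, inr_high, dfl_low), 0.5),  # marginal
        (min(acc_low, inr_low, dfl_low), 0.5),    # marginal
    ]

    # Weighted-average defuzzification of the rule consequents.
    total = sum(w for w, _ in rules)
    return sum(w * v for w, v in rules) / total if total else 0.5

print(stability_level(acceleration=0.9, inertia=0.2, deflection=0.1))  # high stability expected
print(stability_level(acceleration=0.1, inertia=0.9, deflection=0.8))  # low stability expected
```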
16 pages, 4914 KiB  
Article
Implementing Tensor-Organized Memory for Message Retrieval Purposes in Neuromorphic Chips
by Arash Khajooei Nejad, Mohammad (Behdad) Jamshidi and Shahriar B. Shokouhi
Computers 2023, 12(10), 189; https://doi.org/10.3390/computers12100189 - 22 Sep 2023
Viewed by 1285
Abstract
This paper introduces Tensor-Organized Memory (TOM), a novel neuromorphic architecture inspired by the structural and functional principles of the human brain. Utilizing spike-timing-dependent plasticity (STDP) and Hebbian rules, TOM exhibits cognitive behaviors similar to those of the human brain. Compared to conventional architectures using a simplified leaky integrate-and-fire (LIF) neuron model, TOM shows robust performance, even in noisy conditions. TOM’s adaptability and unique organizational structure, rooted in the Columnar-Organized Memory (COM) framework, position it as a transformative digital memory processing solution. Its innovative neural architecture, advanced recognition mechanisms, and integration of synaptic plasticity rules enhance its cognitive capabilities. We compared the TOM architecture with a conventional floating-point architecture using a simplified LIF neuron model, and we ran tests with varying noise levels and partially erased messages to evaluate its robustness. Despite a slight degradation in performance when message noise exceeded 30%, the TOM architecture performed appreciably well under less-than-ideal conditions. This exploration of the TOM architecture reveals its potential as a framework for future neuromorphic systems. The study lays the groundwork for implementing neuromorphic chips in high-performance intelligent edge devices, thereby revolutionizing industries and enhancing user experiences through the power of artificial intelligence. Full article
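TOM itself is not specified in enough detail in the abstract to sketch, but the simplified leaky integrate-and-fire (LIF) neuron it is benchmarked against is a standard model; the snippet below simulates a single LIF neuron with illustrative constants (time constant, threshold, and input current are assumptions, not values from the paper).

```python
# Minimal leaky integrate-and-fire (LIF) neuron sketch with illustrative parameters.

def simulate_lif(input_current, steps=100, dt=1.0, tau=20.0,
                 v_rest=0.0, v_reset=0.0, v_threshold=1.0):
    """Simulate one LIF neuron; return the membrane-potential trace and spike times."""
    v = v_rest
    trace, spikes = [], []
    for t in range(steps):
        # Leaky integration: the membrane potential decays toward rest
        # while being driven by the constant input current.
        v += (-(v - v_rest) + input_current) * (dt / tau)
        if v >= v_threshold:   # threshold crossing -> emit a spike
            spikes.append(t)
            v = v_reset        # reset after spiking
        trace.append(v)
    return trace, spikes

trace, spikes = simulate_lif(input_current=1.5)
print(f"{len(spikes)} spikes at steps: {spikes}")
```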