Computers, Volume 12, Issue 1 (January 2023) – 21 articles

Cover Story: Synchronous machines are widely used as generators and motors in industrial power engineering, mainly because they can work at a constant speed regardless of the load. They can also be used to stabilize power systems by providing reactive power, which helps to maintain the system voltage and frequency via the excitation current. In the conducted research, various artificial intelligence algorithms were applied to obtain a model that accurately estimates the excitation current of a synchronous machine. The results show not only that the excitation current can be estimated using artificial intelligence, but also that the estimates are more precise than those of conventional methods in the field of electric motor drives. View this paper
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
13 pages, 810 KiB  
Article
An Enhanced Virtual Cord Protocol Based Multi-Casting Strategy for the Effective and Efficient Management of Mobile Ad Hoc Networks
by Sohaib Latif, Xianwen Fang, Syed Muhammad Mohsin, Syed Muhammad Abrar Akber, Sheraz Aslam, Hana Mujlid and Kaleem Ullah
Computers 2023, 12(1), 21; https://doi.org/10.3390/computers12010021 - 16 Jan 2023
Cited by 1 | Viewed by 2421
Abstract
To solve problems with limited resources such as power, storage, bandwidth, and connectivity, efficient and effective data management solutions are needed. It is believed that the most successful algorithms for circumventing these constraints are those that self-organise and collaborate. To make the best use of available bandwidth, mobile ad hoc networks (MANETs) employ the strategy of multi-casting. Multi-casting can significantly reduce the communication cost of a network, which saves resources by transmitting only one set of data to numerous receivers at a time. In this study, we implemented multi-casting in the virtual cord protocol (VCP), which uses virtual coordinates (VC) to improve routing effectiveness and control wireless data transmission. We improved the classic VCP so that intermediate nodes can also forward or re-transmit data to interested nodes, which improves data transmission from the sender to multiple receivers. Simulation results proved the efficacy of our proposed enhanced virtual cord protocol-based multi-casting strategy over the traditional VCP and showed that it reduces the number of MAC transmissions, minimizes end-to-end delay, and maximizes the packet delivery ratio. Full article
(This article belongs to the Special Issue Feature Papers in Computers 2023)

11 pages, 4322 KiB  
Article
Application of Somatosensory Computer Game for Nutrition Education in Preschool Children
by Ing-Chau Chang and Chin-En Yen
Computers 2023, 12(1), 20; https://doi.org/10.3390/computers12010020 - 16 Jan 2023
Cited by 1 | Viewed by 2218
Abstract
With the popularization of technological products, people’s everyday lives are now full of 3C (computer, communication, and consumer electronics) products, and children have gradually become acquainted with these new technologies. In recent years, more somatosensory games have been introduced along with the development of new media puzzle games for children. Several studies have shown that somatosensory games can improve physical, brain, and sensory integrated development in children, as well as promoting parent–child and peer interactions and enhancing children’s attention and cooperation in play. The purpose of this study is to assess the effect of integrating somatosensory computer games into early childhood nutrition education. The subjects of this study were 15 preschool children (aged 5–6 years old) from a preschool in Taichung City, Taiwan. We used the somatosensory game “Arno’s Fruit and Vegetable Journey” as an intervention tool for early childhood nutrition education; the game was produced with the Scratch software combined with Rabboni sensors. The somatosensory game education intervention was carried out for one hour a week over two consecutive weeks. We used questionnaires and nutrition knowledge learning sheets to evaluate the children’s nutrition knowledge, learning status, and satisfaction in the first and second weeks of the study. The results showed no statistically significant differences in the preschool children’s game scores, game times, or nutritional knowledge scores before and after the intervention. Most of the preschool children highly enjoyed the somatosensory game educational activities. We reveal some problems in the teaching activities of somatosensory games, which can provide a reference for future research on designing and producing somatosensory games for preschool children and on somatosensory game-based education. Full article
(This article belongs to the Topic Electronic Communications, IOT and Big Data)

15 pages, 874 KiB  
Article
Supervised Machine Learning Models for Liver Disease Risk Prediction
by Elias Dritsas and Maria Trigka
Computers 2023, 12(1), 19; https://doi.org/10.3390/computers12010019 - 13 Jan 2023
Cited by 18 | Viewed by 5585
Abstract
The liver constitutes the largest gland in the human body and performs many different functions. It processes what a person eats and drinks and converts food into nutrients that need to be absorbed by the body. In addition, it filters out harmful substances from the blood and helps tackle infections. Exposure to viruses or dangerous chemicals can damage the liver. When this organ is damaged, liver disease can develop. Liver disease refers to any condition that causes damage to the liver and may affect its function. It is a serious condition that threatens human life and requires urgent medical attention. Early prediction of the disease using machine learning (ML) techniques is the focus of this study. Specifically, in the context of this research work, various ML models and ensemble methods were evaluated and compared in terms of Accuracy, Precision, Recall, F-measure, and area under the curve (AUC) in order to predict liver disease occurrence. The experimental results showed that the Voting classifier outperforms the other models with an accuracy, recall, and F-measure of 80.1%, a precision of 80.4%, and an AUC equal to 88.4% after SMOTE with 10-fold cross-validation. Full article
(This article belongs to the Special Issue Machine and Deep Learning in the Health Domain)
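A minimal sketch of the evaluation pipeline described in this abstract, assuming scikit-learn and imbalanced-learn; the feature matrix, base estimators, and SMOTE settings below are illustrative placeholders, not the paper's exact configuration:

```python
# Sketch: soft-voting ensemble evaluated with SMOTE inside 10-fold cross-validation.
# X, y stand in for the liver-disease feature matrix and labels.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import StratifiedKFold, cross_validate
from imblearn.pipeline import Pipeline
from imblearn.over_sampling import SMOTE

X, y = make_classification(n_samples=500, n_features=10, weights=[0.8, 0.2], random_state=0)

voting = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
                ("nb", GaussianNB())],
    voting="soft",
)
# Placing SMOTE inside the pipeline applies it only to the training folds.
pipe = Pipeline([("smote", SMOTE(random_state=0)), ("clf", voting)])

scores = cross_validate(pipe, X, y,
                        cv=StratifiedKFold(n_splits=10, shuffle=True, random_state=0),
                        scoring=["accuracy", "precision", "recall", "f1", "roc_auc"])
for metric, values in scores.items():
    if metric.startswith("test_"):
        print(metric, round(np.mean(values), 3))
```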

17 pages, 294 KiB  
Article
A Testset-Based Method to Analyse the Negation-Detection Performance of Lexicon-Based Sentiment Analysis Tools
by Maurizio Naldi and Sandra Petroni
Computers 2023, 12(1), 18; https://doi.org/10.3390/computers12010018 - 13 Jan 2023
Cited by 4 | Viewed by 2025
Abstract
The correct detection of negations is essential to the performance of sentiment analysis tools. The evaluation of such tools is currently conducted through the use of corpora as an opportunistic approach. In this paper, we advocate using a different evaluation approach based on a set of intentionally built sentences that include negations, which aim to highlight those tools’ vulnerabilities. To demonstrate the effectiveness of this approach, we propose a basic testset of such sentences. We employ that testset to evaluate six popular sentiment analysis tools (with eight lexicons) available as packages in the R language distribution. By adopting a supervised classification approach, we show that the performance of most of these tools is largely unsatisfactory. Full article
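The paper evaluates R packages; purely as an illustration of the testset idea, the same probe can be sketched in Python with NLTK's VADER (a lexicon-based tool that is not among the eight lexicons studied), checking whether negation flips the predicted polarity:

```python
# Sketch: probe a lexicon-based sentiment tool with intentionally built negated sentences.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

# Hypothetical mini-testset: (sentence, expected polarity) pairs targeting negation handling.
testset = [
    ("The plot was good.", "positive"),
    ("The plot was not good.", "negative"),
    ("I do not think the service was bad.", "positive"),
    ("Nobody liked the ending.", "negative"),
]

correct = 0
for sentence, expected in testset:
    compound = sia.polarity_scores(sentence)["compound"]
    predicted = "positive" if compound > 0 else "negative"
    print(f"{sentence!r:40} -> {predicted} (expected {expected})")
    correct += predicted == expected

print(f"accuracy on negation testset: {correct}/{len(testset)}")
```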

17 pages, 924 KiB  
Article
PharmKE: Knowledge Extraction Platform for Pharmaceutical Texts Using Transfer Learning
by Nasi Jofche, Kostadin Mishev, Riste Stojanov, Milos Jovanovik, Eftim Zdravevski and Dimitar Trajanov
Computers 2023, 12(1), 17; https://doi.org/10.3390/computers12010017 - 09 Jan 2023
Cited by 3 | Viewed by 2566
Abstract
Even though named entity recognition (NER) has seen tremendous development in recent years, some domain-specific use cases still require tagging of unique entities, which is not well handled by pre-trained models. Solutions based on enhancing pre-trained models or creating new ones are efficient, but creating reliable labeled training data for them to learn from is still challenging. In this paper, we introduce PharmKE, a text analysis platform tailored to the pharmaceutical industry that uses deep learning at several stages to perform an in-depth semantic analysis of relevant publications. The proposed methodology is used to produce reliably labeled datasets leveraging cutting-edge transfer learning, which are later used to train models for specific entity labeling tasks. By building models for the well-known text-processing libraries spaCy and AllenNLP, this technique is used to find Pharmaceutical Organizations and Drugs in texts from the pharmaceutical domain. The PharmKE platform also incorporates the NER findings to resolve co-references of entities and examine the semantic linkages in each phrase, creating a foundation for further text analysis tasks, such as fact extraction and question answering. Additionally, the knowledge graph created by DBpedia Spotlight for a specific pharmaceutical text is expanded using the identified entities. The proposed methodology achieves an F1-score of about 96% on the NER tasks, which is up to 2% better than those of the fine-tuned BERT and BioBERT models developed using the same dataset. The ultimate benefit of the platform is that pharmaceutical domain specialists can more easily inspect the knowledge extracted from the input texts thanks to the platform’s visualization of the model findings. Likewise, the proposed techniques can be integrated into mobile and pervasive systems to give patients more relevant and comprehensive information from scanned medication guides. Similarly, it can provide preliminary insights to patients and even medical personnel on whether a drug from a different vendor is compatible with the patient’s prescription medication. Full article
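A hedged sketch of the entity-tagging step with spaCy, one of the libraries the platform builds models for; PharmKE trains custom models for Pharmaceutical Organization and Drug labels, whereas the stock English model used here only knows generic labels and serves as an approximation:

```python
# Sketch: entity tagging with a spaCy pipeline. A PharmKE-style custom model would emit
# Pharmaceutical Organization / Drug labels; en_core_web_sm emits generic ones (ORG, ...).
import spacy

nlp = spacy.load("en_core_web_sm")  # requires: python -m spacy download en_core_web_sm
text = ("Pfizer announced that atorvastatin will be distributed "
        "in cooperation with a regional wholesaler.")

for ent in nlp(text).ents:
    print(ent.text, ent.label_, ent.start_char, ent.end_char)
```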

16 pages, 2104 KiB  
Article
Topic Classification of Online News Articles Using Optimized Machine Learning Models
by Shahzada Daud, Muti Ullah, Amjad Rehman, Tanzila Saba, Robertas Damaševičius and Abdul Sattar
Computers 2023, 12(1), 16; https://doi.org/10.3390/computers12010016 - 09 Jan 2023
Cited by 11 | Viewed by 6598
Abstract
Much news is available online, and not all of it is categorized. A few researchers have carried out work on news classification in the past, and most of that work focused on fake news identification. Most work on news categorization has been carried out on benchmark datasets. The problem with a benchmark dataset is that a model trained with it is not applicable in the real world, as the data are pre-organized. This study used machine learning (ML) techniques to categorize online news articles, as these techniques are cheaper in terms of computational needs and are less complex. This study proposed hyperparameter-optimized support vector machines (SVM) to categorize news articles according to their respective categories. Additionally, five other ML techniques, Stochastic Gradient Descent (SGD), Random Forest (RF), Logistic Regression (LR), K-Nearest Neighbor (KNN), and Naïve Bayes (NB), were optimized for comparison on the news categorization task. The results showed that the optimized SVM model performed better than the other models, while without optimization, its performance was worse than the other ML models. Full article
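A hedged sketch of hyperparameter-optimized SVM topic classification with scikit-learn; the TF-IDF features, toy corpus, and parameter grid are illustrative assumptions, not the paper's exact setup:

```python
# Sketch: TF-IDF features + grid-searched SVM for news topic classification.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

# Toy corpus; real work would use scraped online news articles and their categories.
articles = [
    "Stocks rallied after the central bank held interest rates.",
    "The company reported record quarterly earnings.",
    "The striker scored twice in the cup final.",
    "The team clinched the championship on penalties.",
    "A new smartphone chip promises faster on-device AI.",
    "Researchers unveiled a quantum computing prototype.",
    "Parliament debated the new budget proposal.",
    "The senate passed the election reform bill.",
]
labels = ["business", "business", "sports", "sports",
          "technology", "technology", "politics", "politics"]

pipeline = Pipeline([("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
                     ("svm", SVC())])
param_grid = {"svm__C": [0.1, 1, 10],
              "svm__kernel": ["linear", "rbf"],
              "svm__gamma": ["scale", 0.1]}

search = GridSearchCV(pipeline, param_grid, cv=2)  # small cv only for the toy corpus
search.fit(articles, labels)
print(search.best_params_)
print(search.predict(["The goalkeeper signed a new contract."]))
```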

26 pages, 2681 KiB  
Article
Capacitated Waste Collection Problem Solution Using an Open-Source Tool
by Adriano Santos Silva, Filipe Alves, José Luis Diaz de Tuesta, Ana Maria A. C. Rocha, Ana I. Pereira, Adrián M. T. Silva and Helder T. Gomes
Computers 2023, 12(1), 15; https://doi.org/10.3390/computers12010015 - 07 Jan 2023
Cited by 3 | Viewed by 1911
Abstract
Population in cities is growing worldwide, which puts the systems that offer basic services to citizens under pressure. Among these systems, the Municipal Solid Waste Management System (MSWMS) is also affected. Waste collection and transportation is the first task in an MSWMS, and in most cases it is carried out traditionally. This approach leads to inefficient use of resources and time, since routes are prescheduled or defined by drivers’ choices. Waste collection is recognized as an NP-hard problem that can be modeled as a Capacitated Waste Collection Problem (CWCP). Despite the good quality of the work currently available in the literature, the execution time of the algorithms is often forgotten, and faster algorithms are required to increase the feasibility of the solutions found. In this paper, we show the performance of the open-source Google OR-Tools in solving the CWCP in Bragança, Portugal (an inland city). The three metaheuristics available in this tool were able to significantly reduce the cost associated with waste collection in less than 2 s of execution time. The results obtained in this work prove the applicability of OR-Tools to waste collection problems in bigger systems. Furthermore, the fast response can be useful for developing new platforms for dynamic vehicle routing problems that represent scenarios closer to real ones. We anticipate the proven efficacy of OR-Tools in solving the CWCP as the starting point of developments toward applying optimization algorithms to real and dynamic problems. Full article
(This article belongs to the Special Issue Computational Science and Its Applications 2022)
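A minimal sketch of a capacitated routing model in Google OR-Tools, the tool used in the paper; the distance matrix, bin demands, and vehicle capacities below are placeholder values, not the Bragança data:

```python
# Sketch: Capacitated Waste Collection Problem posed as a CVRP in Google OR-Tools.
from ortools.constraint_solver import pywrapcp, routing_enums_pb2

# Placeholder data: node 0 is the depot, the other nodes are waste bins.
distance = [[0, 9, 7, 6], [9, 0, 4, 8], [7, 4, 0, 5], [6, 8, 5, 0]]
demands = [0, 3, 4, 2]          # filled volume at each bin
capacities = [6, 6]             # two collection vehicles

manager = pywrapcp.RoutingIndexManager(len(distance), len(capacities), 0)
routing = pywrapcp.RoutingModel(manager)

def distance_cb(i, j):
    return distance[manager.IndexToNode(i)][manager.IndexToNode(j)]

def demand_cb(i):
    return demands[manager.IndexToNode(i)]

routing.SetArcCostEvaluatorOfAllVehicles(routing.RegisterTransitCallback(distance_cb))
routing.AddDimensionWithVehicleCapacity(
    routing.RegisterUnaryTransitCallback(demand_cb), 0, capacities, True, "Capacity")

params = pywrapcp.DefaultRoutingSearchParameters()
params.local_search_metaheuristic = (
    routing_enums_pb2.LocalSearchMetaheuristic.GUIDED_LOCAL_SEARCH)
params.time_limit.FromSeconds(2)   # matches the ~2 s budget reported in the paper

solution = routing.SolveWithParameters(params)
if solution:
    print("total collection cost:", solution.ObjectiveValue())
```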

25 pages, 1829 KiB  
Article
Text-to-Ontology Mapping via Natural Language Processing with Application to Search for Relevant Ontologies in Catalysis
by Lukáš Korel, Uladzislau Yorsh, Alexander S. Behr, Norbert Kockmann and Martin Holeňa
Computers 2023, 12(1), 14; https://doi.org/10.3390/computers12010014 - 06 Jan 2023
Cited by 4 | Viewed by 5033
Abstract
The paper presents a machine-learning-based approach to text-to-ontology mapping. We explore the possibility of matching texts to the relevant ontologies using a combination of artificial neural networks and classifiers. Ontologies are formal specifications of the shared conceptualizations of application domains. While describing the same domain, different ontologies might be created by different domain experts. To enhance the reasoning about and data handling of concepts in scientific papers, the ontology that best describes the concepts contained in a text corpus needs to be found. The approach presented in this work attempts to solve this by selecting representative text paragraphs from a set of scientific papers used as the data set. Then, using a pre-trained and fine-tuned Transformer, each paragraph is embedded into a vector space. Finally, the embedded vector is classified with respect to its relevance regarding a selected target ontology. To construct representative embeddings, we experiment with different training pipelines for natural language processing models. Those embeddings in turn are later used in the task of matching text to ontology. Finally, the result is assessed by compressing and visualizing the latent space and exploring the mappings between text fragments from a database and the set of chosen ontologies. To confirm the differences in behavior of the proposed ontology mapper models, we test five statistical hypotheses about their relative performance on ontology classification. To categorize the output from the Transformer, different classifiers are considered, namely the Support Vector Machine (SVM), k-Nearest Neighbor, Gaussian Process, Random Forest, and Multilayer Perceptron. By applying these classifiers in the domain of scientific texts concerning catalysis research and the respective ontologies, their suitability is evaluated; the best result was achieved by the SVM classifier. Full article
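A hedged sketch of the final matching step, assuming a sentence-transformers model for the paragraph embedding (the paper fine-tunes its own Transformer) and an SVM over embeddings labeled with the best-fitting ontology; all texts and ontology names are placeholders:

```python
# Sketch: embed text paragraphs with a Transformer and classify them by target ontology.
from sentence_transformers import SentenceTransformer
from sklearn.svm import SVC

# Placeholder training data: paragraphs paired with the ontology that best describes them.
paragraphs = ["The catalyst showed high selectivity for CO2 hydrogenation.",
              "The reactor temperature profile was controlled by a PID loop.",
              "Zeolite acidity influenced the cracking reaction pathway.",
              "Flow rates were adjusted to keep the residence time constant."]
ontologies = ["catalysis", "process-engineering", "catalysis", "process-engineering"]

encoder = SentenceTransformer("all-MiniLM-L6-v2")   # stand-in for the fine-tuned model
X = encoder.encode(paragraphs)

clf = SVC(kernel="rbf").fit(X, ontologies)
query = encoder.encode(["Acid sites on the support promoted the isomerization step."])
print(clf.predict(query))       # predicted best-fitting ontology for the query paragraph
```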

11 pages, 10664 KiB  
Article
Multistage Spatial Attention-Based Neural Network for Hand Gesture Recognition
by Abu Saleh Musa Miah, Md. Al Mehedi Hasan, Jungpil Shin, Yuichi Okuyama and Yoichi Tomioka
Computers 2023, 12(1), 13; https://doi.org/10.3390/computers12010013 - 05 Jan 2023
Cited by 21 | Viewed by 3234
Abstract
The definition of human–computer interaction (HCI) has changed in recent years because people are interested in interacting with their devices in various ergonomic ways. Many researchers have been working to develop hand gesture recognition systems with kinetic sensor-based datasets, but their performance accuracy is not satisfactory. In our work, we propose a multistage spatial attention-based neural network for hand gesture recognition to overcome these challenges. The proposed model has three stages, each built on a CNN: in the first stage, we apply a feature extractor and a spatial attention module using self-attention to the original dataset and then multiply the feature vector with the attention map to highlight effective features of the dataset. These features are then concatenated with the original dataset to obtain a modality feature embedding. In the same way, we generate a feature vector and attention map in the second stage with the feature extraction architecture and self-attention technique. After multiplying the attention map and features, we produce the final features, which feed into the third stage, a classification module, to predict the label of the corresponding hand gesture. Our model achieved 99.67%, 99.75%, and 99.46% accuracy on the senz3D, Kinematic, and NTU datasets, respectively. Full article
(This article belongs to the Special Issue Feature Papers in Computers 2023)
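A hedged PyTorch sketch of one stage of the idea: extract features, compute a spatial attention map, multiply it with the features, and concatenate the result with the input for the next stage. The paper uses self-attention; the simplified convolutional attention map and layer sizes here only illustrate the multiply-and-concatenate flow:

```python
# Sketch: one spatial-attention stage; the attention map re-weights the extracted features.
import torch
import torch.nn as nn

class SpatialAttentionStage(nn.Module):
    def __init__(self, in_channels, feat_channels=32):
        super().__init__()
        self.extract = nn.Sequential(
            nn.Conv2d(in_channels, feat_channels, 3, padding=1), nn.ReLU())
        # 1x1 conv producing a single-channel spatial attention map in [0, 1]
        self.attention = nn.Sequential(
            nn.Conv2d(feat_channels, 1, 1), nn.Sigmoid())

    def forward(self, x):
        feats = self.extract(x)                 # feature extractor
        attn = self.attention(feats)            # spatial attention map
        weighted = feats * attn                 # highlight effective features
        return torch.cat([weighted, x], dim=1)  # concatenate with the input for the next stage

stage = SpatialAttentionStage(in_channels=3)
out = stage(torch.randn(1, 3, 64, 64))
print(out.shape)   # torch.Size([1, 35, 64, 64])
```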

12 pages, 1489 KiB  
Article
CLCD-I: Cross-Language Clone Detection by Using Deep Learning with InferCode
by Mohammad A. Yahya and Dae-Kyoo Kim
Computers 2023, 12(1), 12; https://doi.org/10.3390/computers12010012 - 04 Jan 2023
Cited by 7 | Viewed by 1933
Abstract
Source code clones are common in software development as part of reuse practice. However, they are also often a source of errors compromising software maintainability. Existing work on code clone detection mainly focuses on clones in a single programming language. However, nowadays software is increasingly developed on multilanguage platforms on which code is reused across different programming languages. Detecting code clones on such platforms is challenging and has not been studied much. In this paper, we present CLCD-I, a deep neural network-based approach for detecting cross-language code clones by using InferCode, an embedding technique for source code. The design of our model is twofold: (a) taking as input InferCode embeddings of source code in two different programming languages and (b) forwarding them to a Siamese architecture for comparative processing. We compare the performance of CLCD-I with LSTM autoencoders and existing approaches to cross-language code clone detection. The evaluation shows that CLCD-I outperforms the LSTM autoencoders by 30% on average and the existing approaches by 15% on average. Full article
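A hedged sketch of the Siamese comparison step, assuming fixed-size InferCode-style embeddings for the two code fragments; the layer sizes and similarity head are illustrative, not the CLCD-I architecture:

```python
# Sketch: Siamese head over two code embeddings (e.g., InferCode vectors) -> clone probability.
import torch
import torch.nn as nn

class SiameseCloneDetector(nn.Module):
    def __init__(self, emb_dim=100, hidden=64):
        super().__init__()
        # Shared encoder applied to both embeddings (weight sharing = Siamese)
        self.encoder = nn.Sequential(nn.Linear(emb_dim, hidden), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, emb_a, emb_b):
        h_a, h_b = self.encoder(emb_a), self.encoder(emb_b)
        # Compare the two representations through their element-wise absolute difference
        return self.head(torch.abs(h_a - h_b))

model = SiameseCloneDetector()
java_emb = torch.randn(1, 100)    # placeholder for an InferCode embedding of Java code
python_emb = torch.randn(1, 100)  # placeholder for an InferCode embedding of Python code
print(model(java_emb, python_emb))  # probability that the two fragments are clones
```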

23 pages, 5895 KiB  
Article
The Fifteen Puzzle—A New Approach through Hybridizing Three Heuristics Methods
by Dler O. Hasan, Aso M. Aladdin, Hardi Sabah Talabani, Tarik Ahmed Rashid and Seyedali Mirjalili
Computers 2023, 12(1), 11; https://doi.org/10.3390/computers12010011 - 03 Jan 2023
Cited by 3 | Viewed by 4959
Abstract
The Fifteen Puzzle problem is one of the most classical problems and has captivated mathematics enthusiasts for centuries. This is mainly because of the huge size of the state space, with approximately 10^13 states that have to be explored, and several algorithms have been applied to solve Fifteen Puzzle instances. In this paper, to manage this large state space, the bidirectional A* (BA*) search algorithm with three heuristics, namely Manhattan distance (MD), linear conflict (LC), and walking distance (WD), has been used to solve the Fifteen Puzzle problem. The three heuristics are hybridized in a way that can dramatically reduce the number of states generated by the algorithm. Moreover, all these heuristics require only 25 KB of storage, yet they help the algorithm effectively reduce the number of generated states and expand fewer nodes. Our implementation of the BA* search can significantly reduce the space complexity and guarantees either optimal or near-optimal solutions. Full article
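As an illustration of the simplest of the three heuristics, a Manhattan-distance function for a 4x4 board state is sketched below; linear conflict and walking distance add further, precomputed corrections on top of this lower bound:

```python
# Sketch: Manhattan distance heuristic for the Fifteen Puzzle (0 denotes the blank tile).
def manhattan_distance(state):
    """state: tuple of 16 ints, row-major; the goal places tile t at index t-1, blank last."""
    total = 0
    for index, tile in enumerate(state):
        if tile == 0:
            continue  # the blank does not contribute
        goal = tile - 1
        total += abs(index // 4 - goal // 4) + abs(index % 4 - goal % 4)
    return total

goal_state = tuple(list(range(1, 16)) + [0])
scrambled = (1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 15, 13, 14, 11, 0)
print(manhattan_distance(goal_state))  # 0
print(manhattan_distance(scrambled))   # admissible lower bound on the number of moves
```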

19 pages, 6571 KiB  
Article
Improved Optimization Algorithm in LSTM to Predict Crop Yield
by Usharani Bhimavarapu, Gopi Battineni and Nalini Chintalapudi
Computers 2023, 12(1), 10; https://doi.org/10.3390/computers12010010 - 03 Jan 2023
Cited by 12 | Viewed by 3922
Abstract
Agriculture is the main occupation across the world, with a dependency on rainfall. Weather changes play a crucial role in crop yield and were used to predict the yield rate by considering precipitation, wind, temperature, and solar radiation. Accurate early crop yield prediction helps market pricing, planning labor, transport, and harvest organization. The main aim of this study is to predict crop yield accurately. Incorporating deep learning models along with crop statistics can predict yield rates accurately. We propose an improved optimizer function (IOF) to obtain an accurate prediction and implement the proposed IOF with the long short-term memory (LSTM) model. Data were collected manually from local agricultural departments for 1901 to 2000 for training and from government websites of Andhra Pradesh (India) for 2001 to 2020 for testing purposes. The proposed model is compared with eight standard learning methods, and the outcomes reveal that the training error is small with the proposed IOF, as it handles the underfitting and overfitting issues. The performance metrics used to compare the loss after implementing the proposed IOF were r, RMSE, and MAE, and the achieved results are an r of 0.48, an RMSE of 2.19, and an MAE of 25.4. The evaluation was performed between the predicted crop yield and the actual yield and was measured in RMSE (kg/ha). The results show that the proposed IOF in LSTM provides accurate crop yield prediction. The reduction in RMSE for the proposed model indicates that the proposed IOFLSTM can outperform the CNN, RNN, and LSTM in crop yield prediction. Full article

16 pages, 292 KiB  
Article
Factors Affecting mHealth Technology Adoption in Developing Countries: The Case of Egypt
by Ghada Refaat El Said
Computers 2023, 12(1), 9; https://doi.org/10.3390/computers12010009 - 28 Dec 2022
Cited by 4 | Viewed by 2262
Abstract
Mobile health apps are seeing rapid growth in their potential to improve access to healthcare services for disadvantaged communities, while enhancing the efficiency of the healthcare delivery value chain. Still, the adoption of mHealth apps is relatively low, especially in developing countries. In Egypt, an initiative for national-level healthcare coverage was launched in 2021, accompanied by a rise in mHealth start-ups. However, many of these projects did not progress beyond the pilot stage, with very little known about the antecedents of mHealth adoption for the Egyptian user. Semi-structured interviews were conducted with 22 Egyptians, aiming to uncover factors affecting the use of mHealth apps by Egyptian citizens. Some of these factors were introduced by previous studies, such as Perceived Service Quality, Perceived Risk, Perceived Ease of Use, and Trust. Others were not well established in the mHealth research strand, such as Perceived Reputation and Perceived Familiarity, while Governance, Personalized Experience, Explainability, Interaction, Language, and Cultural Issues are novel factors introduced by the current research. The effect of these suggested independent variables on the willingness to adopt mHealth apps was validated using a survey administered to 150 Egyptians, confirming the significant positive effect of most of these factors on mHealth adoption in Egypt. This research contributes to methodology by introducing novel constructs in the mHealth research context, which might be specific to the target developing country. Practical implications are suggested for how designers and healthcare service providers might increase the adoption of their apps in developing countries such as Egypt. Full article
(This article belongs to the Special Issue Future Systems Based on Healthcare 5.0 for Pandemic Preparedness)
12 pages, 4561 KiB  
Article
An IoT-Based Deep Learning Framework for Real-Time Detection of COVID-19 through Chest X-ray Images
by Mithun Karmakar, Bikramjit Choudhury, Ranjan Patowary and Amitava Nag
Computers 2023, 12(1), 8; https://doi.org/10.3390/computers12010008 - 28 Dec 2022
Cited by 1 | Viewed by 1676
Abstract
Over the next decade, the Internet of Things (IoT) and the high-speed 5G network will be crucial in enabling remote access to the healthcare system for easy and fast diagnosis. In this paper, an IoT-based deep learning computer-aided diagnosis (CAD) framework is proposed for online and real-time COVID-19 identification. The proposed work first fine-tunes five state-of-the-art deep CNN models, namely Xception, ResNet50, DenseNet201, MobileNet, and VGG19, and then combines these models into a majority-voting deep ensemble CNN (DECNN) model in order to detect COVID-19 accurately. The findings demonstrate that the suggested framework, with a test accuracy of 98%, outperforms other relevant state-of-the-art methodologies in terms of overall performance. The proposed CAD framework has the potential to serve as a decision support system for general clinicians and rural health workers in order to diagnose COVID-19 at an early stage. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)
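A minimal sketch of the majority-voting step over per-model class predictions; the five fine-tuned CNNs are assumed to exist already and are represented here only by illustrative outputs:

```python
# Sketch: hard majority voting over the class predictions of five fine-tuned CNNs.
import numpy as np

# Placeholder predictions for 4 chest X-ray images from 5 models
# (0 = normal, 1 = COVID-19); rows are models, columns are images.
model_predictions = np.array([
    [1, 0, 1, 0],   # e.g. Xception
    [1, 0, 0, 0],   # e.g. ResNet50
    [1, 1, 1, 0],   # e.g. DenseNet201
    [0, 0, 1, 0],   # e.g. MobileNet
    [1, 0, 1, 1],   # e.g. VGG19
])

# For each image, pick the class predicted by the majority of the models.
ensemble = np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, model_predictions)
print(ensemble)   # [1 0 1 0]
```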

13 pages, 1567 KiB  
Article
Framework of Meta-Heuristic Variable Length Searching for Feature Selection in High-Dimensional Data
by Tara Othman Qadir Saraf, Norfaiza Fuad and Nik Shahidah Afifi Md Taujuddin
Computers 2023, 12(1), 7; https://doi.org/10.3390/computers12010007 - 27 Dec 2022
Cited by 2 | Viewed by 1665
Abstract
Feature selection in a high-dimensional space is a combinatorial optimization problem with an NP-hard nature. Meta-heuristic searching with embedded information theory-based criteria in the fitness function for selecting the relevant features is widely used in current feature selection algorithms. However, the increase in the dimension of the solution space leads to a high computational cost and a convergence risk. In addition, sub-optimality might occur due to the assumption of a certain length for the optimal number of features. Alternatively, variable-length searching enables searching within a variable-length solution space, which leads to more optimality and less computational load. The literature contains various meta-heuristic algorithms with variable-length searching. All of them enable searching in high-dimensional problems. However, uncertainty about their performance exists. In order to fill this gap, this article proposes a novel framework for comparing various variants of variable-length-searching meta-heuristic algorithms in the application of feature selection. For this purpose, we implemented four types of variable-length meta-heuristic searching algorithms, namely VLBHO-Fitness, VLBHO-Position, variable length particle swarm optimization (VLPSO) and genetic variable length (GAVL), and compared them in terms of classification metrics. The evaluation showed the overall superiority of VLBHO over the other algorithms in terms of accomplishing lower fitness values when optimizing mathematical functions of the variable length type. Full article
(This article belongs to the Topic Artificial Intelligence Models, Tools and Applications)

30 pages, 2945 KiB  
Review
Blockchain-Based Internet of Things: Review, Current Trends, Applications, and Future Challenges
by Tanweer Alam
Computers 2023, 12(1), 6; https://doi.org/10.3390/computers12010006 - 26 Dec 2022
Cited by 12 | Viewed by 8619
Abstract
Advances in technology have always had an impact on our lives. Several emerging technologies, most notably the Internet of Things (IoT) and blockchain, present transformative opportunities. The blockchain is a decentralized, transparent ledger for storing transaction data. By effectively establishing trust between nodes, it has the remarkable potential to design unique architectures for most enterprise applications. Blockchain first piqued the interest of researchers when it appeared as the platform for anonymous cryptocurrency trading, such as Bitcoin, on a public network. The chain is completed when each block connects to the previous block. The Internet of Things (IoT) is a network of interconnected devices that can exchange data and be managed and controlled via unique identifiers. Automation, wireless sensor networks, embedded systems, and control systems are just a few of the well-known technologies that power the IoT. Converging advancements in real-time analytics, machine learning, commodity sensors, and embedded systems demonstrate the rapid expansion of the IoT paradigm. The Internet of Things refers to the global networking of millions of connected smart gadgets that gather and exchange data. Integrating the IoT and blockchain technology would be a significant step toward developing a reliable, secure, and comprehensive method of storing data collected by smart devices. Internet-enabled devices in the IoT can send data to private blockchain networks, creating immutable records of all transaction history. This research looks at how blockchain technology and the Internet of Things interact to better understand how devices can communicate with one another. The blockchain-enabled Internet of Things architecture proposed in this article is a useful framework for integrating blockchain technology and the Internet of Things using the most cutting-edge tools and methods currently available. This article discusses the principles of blockchain-based IoT, consensus methods, reviews, difficulties, prospects, applications, trends, and communication between IoT nodes in an integrated framework. Full article

15 pages, 2061 KiB  
Article
The Readiness of Lasem Batik Small and Medium Enterprises to Join the Metaverse
by Theresia Dwi Hastuti, Ridwan Sanjaya and Freddy Koeswoyo
Computers 2023, 12(1), 5; https://doi.org/10.3390/computers12010005 - 26 Dec 2022
Cited by 9 | Viewed by 2212
Abstract
Today’s business competitiveness necessitates the capacity of all company players, particularly small and medium enterprises (SMEs), to enter a broader market through information technology. However, the Lasem Batik SMEs have endured a great deal of turmoil during the COVID-19 pandemic. Marketing has been conducted through physical and internet channels, but the results have not been maximized. The purpose of this research was to consider the possibilities of Lasem Batik SMEs adopting metaverse technology as a marketing medium to enhance sales. The investigation was conducted on 40 Lasem Batik SMEs who met the requirements of using online media to sell their products, having a medium-sized firm, and displaying marketing that has reached the provincial level. The findings of this study are as follows: (1) The majority of participants stated that the metaverse is a virtual 3D space. This understanding is deepened by discussions about virtual 3D spaces that combine VR and AR, which today is often referred to as the metaverse. (2) Batik business owners hope that by using the metaverse, they will be able to obtain many benefits, especially related to market expansion. (3) Lasem Batik SMEs show great interest in expanding their marketing channels to a wider area; Lasem Batik entrepreneurs also accept the challenge of studying the metaverse with new knowledge and techniques they have never considered. (4) Overall, 75% of participants were ready to use the metaverse, and 25% still required guidance. (5) Local communities, universities, and large corporations provide great support for the use of the metaverse. (6) The commercial success of Lasem Batik SMEs is defined by product quality; ongoing online and offline advertising; originality and innovation; and the capacity to capitalize on possibilities, retain local wisdom, and preserve strong customer connections. The main conclusion is that the readiness of batik entrepreneurs to use the metaverse is highly dependent on the support of various parties. A strong desire to progress and develop one’s business is the main factor determining one’s intention to use the metaverse. As a result of the research, a prototype of a metaverse platform for Lasem Batik exhibitions has been developed. SMEs can use the room template provided by the platform and join other SMEs to hold a metaverse exhibition to attract global customers. These results can be connected to create a metaverse exhibition to attract global customers. Full article

15 pages, 1797 KiB  
Article
Batch Gradient Learning Algorithm with Smoothing L1 Regularization for Feedforward Neural Networks
by Khidir Shaib Mohamed
Computers 2023, 12(1), 4; https://doi.org/10.3390/computers12010004 - 23 Dec 2022
Viewed by 1390
Abstract
Regularization techniques are critical in the development of machine learning models. Complex models, such as neural networks, are particularly prone to overfitting and thus to generalizing poorly beyond the training data. L1 regularization is the most extreme way to enforce sparsity, but, regrettably, it is difficult to optimize directly due to the non-differentiability of the 1-norm at the origin. However, convergence speed and an efficient optimization solution can be achieved for the L1 regularization term through a proximal method. In this paper, we propose a batch gradient learning algorithm with smoothing L1 regularization (BGSL1) for learning and pruning a feedforward neural network with hidden nodes. To achieve our study purpose, we propose a smoothing (differentiable) function in order to address the non-differentiability of L1 regularization at the origin, make the convergence speed faster, improve the network structuring ability, and build a stronger mapping. Under this condition, strong and weak convergence theorems are provided. We used N-dimensional parity problems and function approximation problems in our experiments. Preliminary findings indicate that BGSL1 converges faster and has good generalization abilities when compared with BGL1/2, BGL1, BGL2, and BGSL1/2. As a result, we demonstrate that the error function decreases monotonically and that the norm of the gradient of the error function approaches zero, thereby validating the theoretical findings and the supremacy of the suggested technique. Full article
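A hedged numerical sketch of the idea of smoothing the 1-norm near the origin; the paper defines its own smoothing function, so the sqrt-based approximation below is only one common choice, not the authors':

```python
# Sketch: a smooth, differentiable surrogate for |w| used in place of the L1 penalty.
import numpy as np

def smooth_abs(w, eps=1e-3):
    """sqrt(w^2 + eps): differentiable everywhere and -> |w| as eps -> 0."""
    return np.sqrt(w ** 2 + eps)

def smooth_abs_grad(w, eps=1e-3):
    """Gradient of the surrogate; well defined even at w = 0."""
    return w / np.sqrt(w ** 2 + eps)

w = np.array([-2.0, -0.01, 0.0, 0.01, 2.0])
print(smooth_abs(w))        # close to |w| away from 0, smooth through 0
print(smooth_abs_grad(w))   # close to sign(w), but continuous at 0

# In batch gradient training, lambda * sum(smooth_abs(W)) is added to the error function
# and lambda * smooth_abs_grad(W) is added to each weight gradient.
penalty_gradient = 1e-4 * smooth_abs_grad(w)
```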

23 pages, 465 KiB  
Article
Experiments with Active-Set LP Algorithms Allowing Basis Deficiency
by Pablo Guerrero-García and Eligius M. T. Hendrix
Computers 2023, 12(1), 3; https://doi.org/10.3390/computers12010003 - 23 Dec 2022
Viewed by 1198
Abstract
An interesting question for linear programming (LP) algorithms is how to deal with solutions in which the number of nonzero variables is less than the number of rows of the matrix in standard form. One approach is that of basis deficiency-allowing (BDA) simplex variations, which work with a subset of independent columns of the coefficient matrix in standard form, wherein the basis is not necessarily represented by a square matrix. We describe one such algorithm with several variants. The research question deals with studying the computational behaviour using small, extreme cases. For these instances, we must wonder which parameter settings or variants are more appropriate. We compare the settings of two nonsimplex active-set methods with Holmström’s commercial TomLab LpSimplex v3.0 sparse primal simplex implementation. All of them update a sparse QR factorization in Matlab. The first two implementations require fewer iterations and provide better solution quality and running time. Full article
(This article belongs to the Special Issue Computational Science and Its Applications 2022)

41 pages, 742 KiB  
Article
Privacy-Enhanced AKMA for Multi-Access Edge Computing Mobility
by Gizem Akman, Philip Ginzboorg, Mohamed Taoufiq Damir and Valtteri Niemi
Computers 2023, 12(1), 2; https://doi.org/10.3390/computers12010002 - 20 Dec 2022
Cited by 1 | Viewed by 2307
Abstract
Multi-access edge computing (MEC) is an emerging technology of 5G that brings cloud computing benefits closer to the user. The current specifications of MEC describe the connectivity of mobile users and the MEC host, but they have issues with application-level security and privacy. We consider how to provide secure and privacy-preserving communication channels between a mobile user and a MEC application in the non-roaming case. It includes protocols for registration of the user to the main server of the MEC application, renewal of the shared key, and usage of the MEC application in the MEC host when the user is stationary or mobile. For these protocols, we designed a privacy-enhanced version of the 5G authentication and key management for applications (AKMA) service. We formally verified the current specification of AKMA using ProVerif and found a new spoofing attack as well as other security and privacy vulnerabilities. Then we propose a fix against the spoofing attack. The privacy-enhanced AKMA is designed considering these shortcomings. We formally verified the privacy-enhanced AKMA and adapted it to our solution. Full article
(This article belongs to the Special Issue Computational Science and Its Applications 2022)

25 pages, 17445 KiB  
Article
Estimation of Excitation Current of a Synchronous Machine Using Machine Learning Methods
by Matko Glučina, Nikola Anđelić, Ivan Lorencin and Zlatan Car
Computers 2023, 12(1), 1; https://doi.org/10.3390/computers12010001 - 20 Dec 2022
Cited by 2 | Viewed by 3829
Abstract
A synchronous machine is an electro-mechanical converter consisting of a stator and a rotor. The stator is the stationary part of a synchronous machine, made of phase-shifted armature windings in which voltage is generated, and the rotor is the rotating part, made using permanent magnets or electromagnets. The excitation current is a significant parameter of the synchronous machine, and it is of immense importance to continuously monitor possible value changes to ensure the smooth and high-quality operation of the synchronous machine itself. The purpose of this paper is to estimate the excitation current on a publicly available dataset using artificial intelligence algorithms, with the following input parameters: Iy (load current), PF (power factor), e (power factor error), and df (change of excitation current of the synchronous machine). The algorithms used in this research were: k-nearest neighbors, linear regression, random forest, ridge, stochastic gradient descent, support vector regressor, multi-layer perceptron, and extreme gradient boost regressor. The worst result was obtained by elasticnet, with R² = −0.0001, MSE = 0.0297, and MAPE = 0.1442; the best results were provided by the extreme gradient boost regressor, with mean scores of R² = 0.9963, MSE = 0.0001, and MAPE = 0.0057. Full article
(This article belongs to the Special Issue Human Understandable Artificial Intelligence)
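A hedged sketch of the regressor-comparison loop described above, with synthetic stand-ins for the load current, power factor, power factor error, and change of excitation current; the real work uses a public synchronous-machine dataset and a broader set of algorithms:

```python
# Sketch: compare several regressors on excitation-current estimation using R2, MSE, MAPE.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_percentage_error

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 4))                  # stand-ins for Iy, PF, e, df
y = 1.3 + 0.8 * X[:, 0] - 0.5 * X[:, 1] + 0.1 * rng.normal(size=500)  # synthetic target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {"knn": KNeighborsRegressor(),
          "ridge": Ridge(),
          "random_forest": RandomForestRegressor(n_estimators=200, random_state=0)}

for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(name,
          "R2 =", round(r2_score(y_te, pred), 4),
          "MSE =", round(mean_squared_error(y_te, pred), 4),
          "MAPE =", round(mean_absolute_percentage_error(y_te, pred), 4))
```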
