Computation doi: 10.3390/computation11060108

Authors: Fatemeh Mollaamin, Majid Monajjemi

In this study, we investigated the abilities of the nitrogen and sulfur heterocyclic carbenes benzotriazole, 2-mercaptobenzothiazole, 8-hydroxyquinoline, and 3-amino-1,2,4-triazole-5-thiol to adsorb on an Al-Mg-Si alloy and inhibit corrosion of its surface. Al-Si(14), Al-Si(19), and Al-Si(21) in the Al-Mg-Si alloy surface, which showed the highest fluctuation in the shielding tensors of the NMR spectrum generated by intra-atomic interaction, pointed us to the strongest influence on neighboring atoms, generated by the interatomic reactions N→Al, O→Al, and S→Al during the coating and Langmuir adsorption process. The values of various thermodynamic properties and the dipole moments of benzotriazole, 2-mercaptobenzothiazole, 8-hydroxyquinoline, and 3-amino-1,2,4-triazole-5-thiol adsorbed on the Al-Mg-Si alloy increased with the molecular weight of these compounds, as well as with the charge distribution between the organic compounds (electron donors) and the alloy surface (electron acceptor). Finally, this research can build up our knowledge of the electronic structure, relative stability, and surface bonding of various metal alloy surfaces, metal-doped alloy nanosheets, and other dependent mechanisms such as heterogeneous catalysis, friction lubrication, and biological systems.

Computation doi: 10.3390/computation11060107

Authors: Carlos De Las Morenas Mateos, Rafael Lahoz-Beltra

Today, graph theory represents one of the most important modeling techniques in biology. One of its most important applications is the study of metabolic networks. During metabolism, a set of sequential biochemical reactions takes place, converting one or more molecules into one or more final products. In a biochemical reaction, the transformation of one metabolite into the next requires a class of proteins called enzymes that are responsible for catalyzing the reaction. Whether by applying differential equations or automata theory, it is not easy to explain how the evolution of metabolic networks could have taken place within living organisms. Obviously, in the past, the assembly of biochemical reactions into a metabolic network depended on the independent evolution of the enzymes involved in the isolated biochemical reactions. In this work, a simulation model is presented in which enzymes are modeled as automata and their evolution is simulated with a genetic algorithm. This protocol is applied to the evolution of glycolysis and the Krebs cycle, two of the most important metabolic networks for the survival of organisms. The results obtained show how Darwinian evolution is able to optimize a biological network, as in the case of the glycolysis and Krebs metabolic networks.

Computation doi: 10.3390/computation11060106

Authors: Aristotelis P. Sgouros, Doros N. Theodorou

Mesoscopic simulations of long polymer chains and soft matter systems are conducted routinely in the literature in order to assess the long-lived relaxation processes manifested in these systems. Coarse-grained chains are, however, prone to unphysical intercrossing due to their inherent softness. This issue can be resolved by introducing long intermolecular bonds (the so-called slip-springs) which restore these topological constraints. The separation vector of intermolecular bonds can be determined by enforcing the commonly adopted minimum image convention (MIC). Because these bonds are soft and long (ca. 3–20 nm), subjecting the samples to extreme deformations can lead to topology violations when enforcing the MIC. We propose the fixed image convention (FIC) for determining the separation vectors of overextended bonds, which is more stable than the MIC and applicable to extreme deformations. The FIC is simple to implement and, in general, more efficient than the MIC. Side-by-side comparisons between the MIC and FIC demonstrate that, when using the FIC, the topology remains intact even in situations with extreme particle displacement and nonaffine deformation. The accuracy of these conventions is the same when applying affine deformation. The article is accompanied by the corresponding code for implementing the FIC.
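
The MIC wrap-around and the contrasting fixed-image idea can be written in a few lines of arithmetic. Below is a minimal 1D sketch assuming a periodic box of length L and a per-bond fixed image shift; this illustrates the general idea only, not the authors' implementation.

```python
# Minimum image convention (MIC) vs. a fixed image convention (FIC) in a
# 1D periodic box of length L. Illustrative sketch; box geometry and the
# per-bond image bookkeeping are assumptions, not the paper's code.

def mic_separation(x1, x2, L):
    """Separation x2 - x1 under the minimum image convention."""
    dx = x2 - x1
    dx -= L * round(dx / L)  # map into [-L/2, L/2]
    return dx

def fic_separation(x1, x2, image_shift, L):
    """Fixed image convention: the image of the bonded partner is chosen
    once (image_shift, in box lengths) and kept fixed thereafter, so an
    overextended slip-spring cannot silently hop to a different image."""
    return (x2 + image_shift * L) - x1

# A bond stretched past L/2: MIC remaps to the nearer periodic image,
# while FIC keeps tracking the originally bonded image.
L = 10.0
x1, x2 = 0.0, 6.0                    # true separation is +6.0 (> L/2)
print(mic_separation(x1, x2, L))     # -4.0: MIC picked the other image
print(fic_separation(x1, x2, 0, L))  # 6.0: FIC keeps the original image
```

Under the MIC, any separation longer than L/2 is silently remapped to a different periodic image, which is exactly the failure mode for long slip-springs under extreme deformation; fixing the image per bond avoids it.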

Computation doi: 10.3390/computation11060105

Authors: Rebecca Distefano, Mirolyuba Ilieva, Jens Hedelund Madsen, Shizuka Uchida

Crohn disease (CD) is a type of inflammatory bowel disease that causes inflammation in the digestive tract. Cases of CD are increasing worldwide, calling for more research to elucidate the pathogenesis of CD. For this purpose, the usage of the RNA-sequencing (RNA-seq) technique is increasingly appreciated, as it captures RNA expression patterns at a particular time point in a high-throughput manner. Although many RNA-seq datasets are generated from CD patients and compared to those of healthy donors, most of these datasets are analyzed only for protein-coding genes, leaving non-coding RNAs (ncRNAs) undiscovered. Long non-coding RNAs (lncRNAs) are any ncRNAs that are longer than 200 nucleotides. Interest in studying lncRNAs is increasing rapidly, as lncRNAs bind other macromolecules (DNA, RNA, and/or proteins) to fine-tune signaling pathways. To fill the gap in knowledge about lncRNAs in CD, we performed secondary analysis of published RNA-seq data of CD patients compared to healthy donors to identify lncRNA genes and their expression changes. To further facilitate lncRNA research in CD, we built a web database, CrohnDB, to provide a one-stop shop for expression profiling of protein-coding and lncRNA genes in CD patients compared to healthy donors.

Computation doi: 10.3390/computation11060104

Authors: Abrar Alotaibi, Lujain Alnajrani, Nawal Alsheikh, Alhatoon Alanazy, Salam Alshammasi, Meshael Almusairii, Shoog Alrassan, Aisha Alansari

Hepatitis C is a liver infection caused by a virus, which results in mild to severe inflammation of the liver. Over many years, hepatitis C gradually damages the liver, often leading to permanent scarring, known as cirrhosis. Patients sometimes have moderate or no symptoms of liver illness for decades before developing cirrhosis. Cirrhosis typically worsens to the point of liver failure. Patients with cirrhosis may also experience brain and nerve system damage, as well as gastrointestinal hemorrhage. Treatment for cirrhosis focuses on preventing further progression of the disease. Detecting cirrhosis earlier is therefore crucial for avoiding complications. Machine learning (ML) has been shown to be effective at providing precise and accurate information for use in diagnosing several diseases. Despite this, no studies have so far used ML to detect cirrhosis in patients with hepatitis C. This study obtained a dataset consisting of 28 attributes of 2038 Egyptian patients from the ML Repository of the University of California at Irvine. Four ML algorithms were trained on the dataset to diagnose cirrhosis in hepatitis C patients: a Random Forest, a Gradient Boosting Machine, an Extreme Gradient Boosting model, and an Extra Trees model. The Extra Trees model outperformed the other models, achieving an accuracy of 96.92%, a recall of 94.00%, a precision of 99.81%, and an area under the receiver operating characteristic curve of 96% using only 16 of the 28 features.
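
The reported accuracy, recall, and precision follow directly from confusion-matrix counts; a quick sketch with made-up counts (not the study's actual confusion matrix):

```python
# Classification metrics from confusion-matrix counts. The counts below
# are illustrative, not taken from the paper.

def metrics(tp, fp, tn, fn):
    accuracy  = (tp + tn) / (tp + fp + tn + fn)
    recall    = tp / (tp + fn)   # sensitivity: found cirrhosis cases
    precision = tp / (tp + fp)   # positive predictions that were correct
    return accuracy, recall, precision

acc, rec, prec = metrics(tp=47, fp=1, tn=79, fn=3)
print(acc, rec, prec)  # ~0.969, 0.94, ~0.979
```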

Computation doi: 10.3390/computation11050103

Authors: Vesa Kuikka

We present a generalised complex contagion model for describing behaviour and opinion spreading on social networks. Recurrent interactions between adjacent nodes and circular influence in loops in the network structure enable the modelling of influence spreading on the network scale. We have presented details of the model in our earlier studies. Here, we focus on the interpretation of the model and discuss its features by using conventional concepts in the literature. In addition, we discuss how the model can be extended to account for specific social phenomena in social networks. We demonstrate the differences between the results of our model and a simple contagion model. Results are provided for a small social network and a larger collaboration network. As an application of the model, we present a method for profiling individuals based on their out-centrality, in-centrality, and betweenness values in the social network structure. These measures have been defined consistently with our spreading model based on an influence spreading matrix. The influence spreading matrix captures the directed spreading probabilities between all node pairs in the network structure. Our results show that recurrent and circular influence have considerable effects on node centrality values and spreading probabilities in the network structure.
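
As a sketch of how such centralities can be read off an influence spreading matrix, take row sums for out-centrality and column sums for in-centrality (self-influence excluded). The normalisation here is an assumption for illustration, not necessarily the paper's exact definition.

```python
# P[i][j] = probability that node i's influence reaches node j.
# Out-centrality: how widely a node spreads influence (row sum).
# In-centrality: how much influence a node receives (column sum).

def centralities(P):
    n = len(P)
    out_c = [sum(P[i][j] for j in range(n) if j != i) for i in range(n)]
    in_c  = [sum(P[j][i] for j in range(n) if j != i) for i in range(n)]
    return out_c, in_c

# Toy 3-node spreading matrix (values assumed for illustration)
P = [[1.0, 0.8, 0.3],
     [0.2, 1.0, 0.6],
     [0.1, 0.4, 1.0]]
out_c, in_c = centralities(P)
print(out_c)  # node 0 spreads influence the most
print(in_c)   # node 1 receives influence the most
```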

Computation doi: 10.3390/computation11050102

Authors: Paraskevi K. Askouni

In common construction practice, various examples can be found involving a building type consisting of a lower, older, reinforced concrete structure and a more recent upper steel part, forming a so-called "hybrid" building. Conventional seismic design rules give full guidelines for the earthquake design of buildings constructed with the same material throughout. The current seismic codes neglect to provide specific design and detailing guidelines for vertical hybrid buildings, and limited research is available in the literature, thus leaving a scientific gap that needs to be investigated. In the present work, an effort is made to fill this gap in the knowledge about the behavior of this hybrid building type under sequential earthquakes, which are found in the literature to burden the seismic structural response. Three-dimensional models of hybrid reinforced concrete–steel frames are exposed to sequential ground excitations in the horizontal and vertical directions, while considering the elastoplastic behavior of the structural elements in the time domain. The lower reinforced concrete parts of the hybrid buildings are detailed here, by a simple approximation, as corresponding to an older structure. In addition, two boundary connections of the structural steel part upon the r/c part are distinguished for examination in the elastoplastic analyses. Comparisons of the numerical analysis results of the hybrid frames for the examined connections are carried out. The seismic response plots of the current non-linear dynamic time-domain analyses of the 3D hybrid frames subjected to sequential ground excitations yield useful conclusions toward guidelines for a safer seismic design of this hybrid building type, which is not covered by the current codes despite being common practice.

Computation doi: 10.3390/computation11050101

Authors: Brian Mintz, Feng Fu

Cultures around the world show varying levels of conservatism. While maintaining traditional ideas prevents wrong ones from being embraced, it also slows or prevents adaptation to new times. Without exploration there can be no improvement, but often this effort is wasted as it fails to produce better results, making it better to exploit the best known option. This tension is known as the exploration/exploitation issue, and it occurs at the individual and group levels whenever decisions are made. As such, it has been investigated across many disciplines. We extend previous work by approximating a continuum of traits under local exploration, employing the method of adaptive dynamics, and studying multiple fitness functions. In this work, we ask how nature would solve the exploration/exploitation issue, by allowing natural selection to operate on an exploration parameter in a variety of contexts, thinking of exploration as mutation in a trait space with a varying fitness function. Specifically, we study how exploration rates evolve by applying adaptive dynamics to the replicator-mutator equation, under two types of fitness functions. For the first, payoffs are accrued from playing a two-player, two-action symmetric game; we consider representatives of all games in this class, including the Prisoner's Dilemma, Hawk-Dove, and Stag Hunt games, finding that exploration rates often evolve downwards but can also undergo neutral selection, depending on the game's parameters or initial conditions. Second, we study time-dependent fitness with a function having a single oscillating peak. By increasing the period, we see a jump in the optimal exploration rate, which then decreases towards zero as the frequency of environmental change increases. These results establish several possible evolutionary scenarios for exploration rates, providing insight into many applications, including why we see such diversity in rates of cultural change.
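
A minimal discrete-time sketch of the replicator-mutator dynamics for a two-action symmetric game, with a uniform mutation kernel; the Hawk-Dove payoffs (shifted to be positive) are illustrative assumptions, not the paper's parameterisation.

```python
# Discrete-time replicator-mutator update: strategies reproduce in
# proportion to fitness, then mutate with rate mu.

def replicator_mutator_step(x, A, mu):
    """One update of strategy frequencies x under payoff matrix A."""
    n = len(x)
    f = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]  # fitness
    phi = sum(xi * fi for xi, fi in zip(x, f))                     # mean fitness
    # mutation kernel: keep strategy with prob 1-mu, switch with mu/(n-1)
    Q = [[1 - mu if i == j else mu / (n - 1) for j in range(n)] for i in range(n)]
    return [sum(x[j] * f[j] * Q[j][i] for j in range(n)) / phi for i in range(n)]

# Hawk-Dove with V=2, C=4, payoffs shifted by +2 to stay positive;
# the mixed equilibrium sits at x_Hawk = V/C = 0.5.
A = [[1.0, 4.0],   # Hawk vs Hawk, Hawk vs Dove
     [2.0, 3.0]]   # Dove vs Hawk, Dove vs Dove
x = [0.9, 0.1]
for _ in range(500):
    x = replicator_mutator_step(x, A, mu=0.01)
print(x)  # converges to the mixed Hawk-Dove equilibrium [0.5, 0.5]
```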

Computation doi: 10.3390/computation11050100

Authors: Rungwasun Kraiklang, Chakat Chueadee, Ganokgarn Jirasirilerd, Worapot Sirirak, Sarayut Gonwirat

This study presents a methodology that combines artificial multiple intelligence systems (AMISs) and machine learning to forecast the ultimate tensile strength (UTS), maximum hardness (MH), and heat input (HI) of AA-5083 and AA-6061 friction stir welding. The machine learning model integrates two machine learning methods, Gaussian process regression (GPR) and a support vector machine (SVM), into a single model, and then uses the AMIS as the decision fusion strategy to merge SVM and GPR. The generated model was utilized to anticipate three objectives based on seven controlled/input parameters. These parameters were: tool tilt angle, rotating speed, travel speed, shoulder diameter, pin geometry, type of reinforcing particles, and tool pin movement mechanism. The effectiveness of the model was evaluated using a two-experiment framework. In the first experiment, we used two newly produced datasets, (1) the 7PI-V1 dataset and (2) the 7PI-V2 dataset, and compared the results with state-of-the-art approaches. The second experiment used existing datasets from the literature with varying base materials and parameters. The computational results revealed that the proposed method produced more accurate prediction results than the previous methods. For all datasets, the proposed strategy outperformed existing methods and state-of-the-art processes by an average of 1.35% to 6.78%.

Computation doi: 10.3390/computation11050099

Authors: Evgeny Nikulchev, Alexander Chervyakov

The task of time series forecasting is to estimate future values based on available observational data. Prediction interval methods aim to find not the next point but the interval into which the future value, or several values on the forecast horizon, can fall given current and historical data. This article proposes an approach for modeling a robust interval forecast for a stock portfolio. Here, a trading strategy was developed to profit from trading stocks in the market. The study used real trading data: the forty securities used to calculate the IMOEX index. The securities with the highest weight were GAZP, LKOH, and SBER. This definition of the strategy makes it possible to operate with large portfolios. The accuracy of the forecast was increased by estimating an interval rather than a single point: a range of values, considered without regard to specific moments, guarantees the reliability of the forecast. The use of a predictive interval approach for share prices can increase their profitability.
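
A minimal sketch of an interval forecast: take empirical quantiles of historical one-step price changes and attach them to the last observation. This is a generic quantile construction for illustration, not the authors' robust-interval method.

```python
# Empirical (1 - alpha) prediction interval for the next value of a
# series, built from the quantiles of historical one-step increments
# around a naive persistence forecast. Illustrative prices only.

def prediction_interval(series, alpha=0.2):
    diffs = sorted(b - a for a, b in zip(series, series[1:]))
    lo_i = int(len(diffs) * (alpha / 2))
    hi_i = int(len(diffs) * (1 - alpha / 2)) - 1
    last = series[-1]
    return last + diffs[lo_i], last + diffs[hi_i]

prices = [100, 101, 99, 102, 103, 101, 104, 105, 103, 106]
low, high = prediction_interval(prices, alpha=0.2)
print(low, high)  # 104 109: interval the next price is expected to fall into
```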

Computation doi: 10.3390/computation11050098

Authors: Carlos Balsa, Murilo M. Breve, Carlos V. Rodrigues, José Rufino

The reconstruction or prediction of meteorological records through the Analog Ensemble (AnEn) method is very efficient when the number of predictor time series is small. Thus, in order to take advantage of the richness and diversity of information contained in a large number of predictors, it is necessary to reduce their dimension. This study presents methods to accomplish such a reduction, allowing the use of a high number of predictor variables. In particular, the techniques of Principal Component Analysis (PCA) and Partial Least Squares (PLS) are used to reduce the dimension of the predictor dataset without loss of essential information. The combination of the AnEn and PLS techniques results in a very efficient hybrid method (PLSAnEn) for reconstructing or forecasting unstable meteorological variables, such as wind speed. This hybrid method is computationally demanding, but its performance can be improved via parallelization or the introduction of variants in which all possible analogs are previously clustered. The multivariate linear regression methods used on the new variables resulting from the PCA or PLS techniques also proved to be efficient, especially for the prediction of meteorological variables without local oscillations, such as pressure.
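
As a sketch of the dimension-reduction step, PCA can be written as a covariance eigendecomposition followed by projection onto the leading components; toy predictor data, numpy only, not the paper's pipeline.

```python
# PCA via covariance eigendecomposition: center the predictor matrix,
# take the top-k eigenvectors of its covariance, and project onto them.
import numpy as np

def pca_reduce(X, k):
    """Project rows of X onto the top-k principal components."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)           # eigenvalues in ascending order
    top = vecs[:, np.argsort(vals)[::-1][:k]]  # top-k components
    return Xc @ top

# 4 observations of 3 strongly correlated predictors (toy data)
X = np.array([[1.0, 2.0, 1.1],
              [2.0, 4.1, 2.0],
              [3.0, 6.0, 3.1],
              [4.0, 8.1, 4.0]])
Z = pca_reduce(X, 1)   # 3 predictors compressed to 1 component
print(Z.shape)         # (4, 1)
```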

Computation doi: 10.3390/computation11050097

Authors: Jose Juan Garcia-Hernandez, Miguel Morales-Sandoval, Erick Elizondo-Rodríguez

In the big data era, processing large amounts of data imposes several challenges, mainly in terms of performance. Complex operations in data science, such as deep learning, large-scale simulations, and visualization applications, can consume a significant amount of computing time. Heterogeneous computing is an attractive alternative for algorithm acceleration, using not one but several different kinds of computing devices (CPUs, GPUs, or FPGAs) simultaneously. Accelerating an algorithm for a specific device under a specific framework, e.g., CUDA/GPU, provides a solution with the highest possible performance at the cost of a loss in generality and requires an experienced programmer. On the contrary, heterogeneous computing allows one to hide the details pertaining to the simultaneous use of different technologies in order to accelerate computation. However, effective heterogeneous computing implementation still requires mastering the underlying design flow. Aiming to fill this gap, in this paper we present a heterogeneous computing platform (HCP). This platform allows non-experts in heterogeneous computing to deploy, run, and evaluate high-computational-demand algorithms following a semi-automatic design flow. Given the implementation of an algorithm in C with minimal format requirements, the platform automatically generates the parallel code using a code analyzer, which is adapted to target a set of available computing devices. Thus, while an experienced heterogeneous computing programmer is not required, the process can run over the available computing devices on the platform, as it is not an ad hoc solution for a specific computing device. The proposed HCP relies on the OpenCL specification for interoperability and generality. The platform was validated and evaluated in terms of generality and efficiency through a set of experiments using the algorithms of the Polybench/C suite (version 3.2) as the input. Different configurations for the platform were used, considering CPUs only, GPUs only, and a combination of both. The results revealed that the proposed HCP was able to achieve accelerations of up to 270× for specific classes of algorithms, i.e., parallel-friendly algorithms, while its use required almost no expertise in either OpenCL or heterogeneous computing from the programmer/end-user.

Computation doi: 10.3390/computation11050096

Authors: Methaporn Phongying, Sasiprapa Hiriote

Machine learning techniques play an increasingly prominent role in medical diagnosis. With the use of these techniques, patients' data can be analyzed to find patterns or facts that are difficult to explain, making diagnoses more reliable and convenient. The purpose of this research was to compare the efficiency of diabetic classification models using four machine learning techniques: decision trees, random forests, support vector machines, and K-nearest neighbors. In addition, new diabetic classification models are proposed that incorporate hyperparameter tuning and the addition of some interaction terms into the models. These models were evaluated based on accuracy, precision, recall, and the F1-score. The results of this study show that the proposed models with interaction terms have better classification performance than those without interaction terms for all four machine learning techniques. Among the proposed models with interaction terms, random forest classifiers had the best performance, with 97.5% accuracy, 97.4% precision, 96.6% recall, and a 97% F1-score. The findings from this study can be further developed into a program that can effectively screen potential diabetes patients.
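
The interaction terms the models benefit from are simply pairwise products of the original features; a sketch, with feature names assumed for illustration.

```python
# Augment a feature vector with all pairwise interaction terms x_i * x_j,
# the preprocessing step the study reports improving all four classifiers.
from itertools import combinations

def add_interactions(row):
    """Append the product of every feature pair to the row."""
    return row + [a * b for a, b in combinations(row, 2)]

row = [2.0, 3.0, 5.0]            # e.g. glucose, BMI, age (scaled; assumed names)
print(add_interactions(row))     # [2.0, 3.0, 5.0, 6.0, 10.0, 15.0]
```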

Computation doi: 10.3390/computation11050095

Authors: Varadarajan Rengaraj, Sebastian Jost, Franz Bethke, Christian Plessl, Hossein Mirhosseini, Andrea Walther, Thomas D. Kühne

Predicting the chemical stability of yet-to-be-discovered materials is an important aspect of the discovery and development of virtual materials. The conventional approach of computing the enthalpy of formation with ab initio methods is time consuming and computationally demanding. In this regard, alternative machine learning approaches have been proposed to predict the formation energies of different classes of materials with decent accuracy. In this paper, one such machine learning approach, a novel two-step method that predicts the formation energy of ternary compounds, is presented. In the first step, with a classifier, we determine the accuracy of heuristically calculated formation energies in order to increase the size of the training dataset for the second step. The second step is a regression model that predicts the formation energy of the ternary compounds. The first step leads to at least a 100% increase in the size of the dataset with respect to the data available in the Materials Project database. The results from the regression model match those from the existing state-of-the-art prediction models. In addition, we propose a slightly modified version of the Adam optimizer, namely the centered Adam optimizer, and report the results from testing it.

Computation doi: 10.3390/computation11050094

Authors: Agustín Moreno Cañadas, Odette M. Mendez, Juan-Carlos Riaño-Rojas, Juan-David Hormaza

The open shop scheduling problem (OSSP) is one of the standard scheduling problems. It consists of scheduling jobs associated with a finite set of tasks processed by different machines. In this case, each machine processes at most one operation at a time, and the job processing order on the machines does not matter. The goal is to determine the completion times of the operations processed on the machines so as to minimize the largest job completion time, called Cmax. This paper proves that each OSSP has an associated path algebra, called a Brauer configuration algebra, whose representation theory (particularly its dimension and the dimension of its center) can be given in terms of the corresponding Cmax value. It is also proved that the dimensions of the centers of Brauer configuration algebras associated with OSSPs with minimal Cmax are congruent modulo the number of machines.
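
For orientation, the standard lower bound on the open-shop makespan Cmax is the larger of the heaviest machine load and the longest job; a sketch under the usual OSSP definitions (processing times p[job][machine]), not tied to the paper's algebraic construction.

```python
# Open-shop makespan lower bound: Cmax >= max(heaviest machine load,
# longest job), since every machine must finish all its operations and
# every job must finish all of its tasks.

def cmax_lower_bound(p):
    """p[j][m] = processing time of job j on machine m."""
    machine_loads = [sum(p[j][m] for j in range(len(p))) for m in range(len(p[0]))]
    job_loads = [sum(row) for row in p]
    return max(max(machine_loads), max(job_loads))

# 3 jobs x 2 machines (toy instance)
p = [[2, 3],
     [4, 1],
     [1, 2]]
print(cmax_lower_bound(p))  # 7: machine 0's load (2+4+1) dominates
```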

Computation doi: 10.3390/computation11050093

Authors: Shivam Verma, Gurpreet Singh, Arnab Chanda

The human spine is susceptible to a wide variety of adverse consequences from vibrations, including lower back discomfort. These effects are often seen in the drivers of vehicles, earth-moving equipment, and trucks, and also in those who drive for long hours in general. The human spine is composed of vertebrae, discs, and tissues that work together to provide it with a wide range of movements and the significant load-carrying capability needed for daily physical exercise. However, there is a limited understanding of vibration characteristics in different age groups and of the effect of vibration transmission along the spinal column, which may be harmful to its different sections. In this work, a novel finite element model (FEM) was developed to study the variation of the vibration absorption capacity of the different sections of the human spine due to aging. These variations were observed from the first three natural frequencies of the human spine structure, which were obtained by solving the eigenvalue problem of the novel finite element model for different ages. From the results, aging was observed to lead to an increase in the natural frequencies of all three spinal segments. As age increased beyond 30 years, the natural frequency increased significantly for the thoracic segment compared to the lumbar and cervical segments. A range of such novel findings indicated the harmful frequencies at which resonance may occur, causing spinal pain and possible injuries. This information would be indispensable for spinal surgeons for the prognosis of spinal column injury (SCI) patients affected by harmful vibrations from workplaces, as well as for manufacturers of automotive and aerospace equipment designing effective dampers for better whole-body vibration mitigation.
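
The natural frequencies come from the generalized eigenproblem K v = ω² M v of the finite element model. A 2-DOF spring-mass sketch with assumed stiffness and mass matrices (not the spine model's) shows the computation:

```python
# Natural frequencies of a discretized structure: solve K v = omega^2 M v.
# Toy 2-DOF system; K and M values are illustrative assumptions.
import numpy as np

K = np.array([[ 3.0, -1.0],
              [-1.0,  1.0]])   # stiffness (aging stiffens tissue, raising K)
M = np.array([[ 1.0,  0.0],
              [ 0.0,  1.0]])   # lumped mass

# omega^2 are the eigenvalues of M^-1 K
omega2 = np.sort(np.linalg.eigvals(np.linalg.solve(M, K)).real)
freqs_hz = np.sqrt(omega2) / (2 * np.pi)
print(freqs_hz)  # natural frequencies in Hz, lowest first
```

Increasing the entries of K (stiffer segments with age) raises the eigenvalues and hence the natural frequencies, the trend the abstract reports.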

Computation doi: 10.3390/computation11050092

Authors: Claire Jean-Quartier, Katharina Bein, Lukas Hejny, Edith Hofer, Andreas Holzinger, Fleur Jeanquartier

In response to socioeconomic development, the number of machine learning applications has increased, along with calls for algorithmic transparency and further sustainability in terms of energy-efficient technologies. Modern computer algorithms that process large amounts of information, particularly artificial intelligence methods and their workhorse machine learning, can be used to promote and support sustainability; however, they consume a lot of energy themselves. This work focuses on and interconnects two key aspects of artificial intelligence: the transparency and the sustainability of model development. We identify frameworks for measuring carbon emissions from Python algorithms and evaluate energy consumption during model development. Additionally, we test the impact of explainability on algorithmic energy consumption during model optimization, particularly for applications in health and, to expand the scope and achieve widespread use, civil engineering and computer vision. Specifically, we present three different models of classification, regression, and object-based detection for the scenarios of cancer classification, building energy, and image detection, each integrated with explainable artificial intelligence (XAI) or feature reduction. This work can serve as a guide for selecting a tool to measure and scrutinize algorithmic energy consumption and raise awareness of emission-based model optimization by highlighting the sustainability of XAI.

Computation doi: 10.3390/computation11050091

Authors: Rafiq Bodalal, Farag Shuaeib

In this study, the newly developed Marine Predators Algorithm (MPA) is formulated to minimize the weight of truss structures. MPA is a swarm-based metaheuristic algorithm inspired by the efficient foraging strategies of marine predators in oceanic environments. In order to assess the robustness of the proposed method, three normal-sized structural benchmarks (10-bar, 60-bar, and 120-bar spatial dome) and three large-scale structures (272-bar, 942-bar, and 4666-bar truss tower) were selected from the literature. Results point to the inherent strength of MPA compared with the state-of-the-art metaheuristic optimizers implemented so far. Moreover, for the first time in the field, a quantitative evaluation and an answer to the age-old question of the proper convergence behavior (exploration vs. exploitation balance) in the context of structural optimization is conducted. To this end, a novel dimension-wise diversity index is adopted as a methodology to investigate each of the two schemes. It was concluded that the balance that produced the best results was about 90% exploitation and 10% exploration (on average for the entire computational process).
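
One way to quantify the exploration/exploitation balance is a dimension-wise diversity index: the swarm's mean absolute deviation from its per-dimension median, averaged over dimensions. The exact normalisation below is an assumption for illustration, not necessarily the paper's definition.

```python
# Dimension-wise population diversity: for each dimension, take the mean
# absolute deviation of the swarm from its median, then average over
# dimensions. High diversity = exploring; low = exploiting.
from statistics import median

def diversity(population):
    """population: list of positions, each a list of coordinates."""
    dims = len(population[0])
    total = 0.0
    for d in range(dims):
        col = [x[d] for x in population]
        m = median(col)
        total += sum(abs(v - m) for v in col) / len(col)
    return total / dims

spread    = [[0.0, 0.0], [10.0, 10.0], [5.0, 5.0]]   # exploring swarm
collapsed = [[5.0, 5.0], [5.0, 5.1], [5.1, 5.0]]     # exploiting swarm
print(diversity(spread), diversity(collapsed))
```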

Computation doi: 10.3390/computation11050090

Authors: Apostolos Kotzinos, Vasilios Canellidis, Dimitrios Psychoyios

We examine the main effects of ICT penetration and the shadow economy on sovereign credit ratings and the cost of debt, along with possible second-order effects between the two variables, on a dataset of 65 countries from 2001 to 2016. The paper presents a range of machine-learning approaches, including bagging, random forests, gradient-boosting machines, and recurrent neural networks. Furthermore, following recent trends in the emerging field of interpretable ML, based on model-agnostic methods such as feature importance and accumulated local effects, we attempt to explain which factors drive the predictions of the so-called ML black box models. We show that policies facilitating the penetration and use of ICT and aiming to curb the shadow economy may exert an asymmetric impact on sovereign ratings and the cost of debt depending on their present magnitudes, not only independently but also in interaction.
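
Of the model-agnostic methods mentioned, permutation feature importance is the simplest to sketch: permute one feature and measure how much the model's error grows. A pure-Python toy with an illustrative model, not the study's fitted models:

```python
# Model-agnostic permutation importance: shuffle one feature column,
# remeasure the model's error, and report the average error increase.
import random

def permutation_importance(model, X, y, feature, n_repeats=10, seed=0):
    rng = random.Random(seed)
    def mse(Xs):
        return sum((model(row) - t) ** 2 for row, t in zip(Xs, y)) / len(y)
    base = mse(X)
    drops = []
    for _ in range(n_repeats):
        col = [row[feature] for row in X]
        rng.shuffle(col)
        Xp = [row[:feature] + [v] + row[feature + 1:] for row, v in zip(X, col)]
        drops.append(mse(Xp) - base)
    return sum(drops) / n_repeats

# Toy model uses only feature 0, so permuting feature 1 changes nothing.
model = lambda row: 2.0 * row[0]
X = [[1.0, 9.0], [2.0, 1.0], [3.0, 4.0], [4.0, 7.0]]
y = [2.0, 4.0, 6.0, 8.0]
print(permutation_importance(model, X, y, 0))  # > 0: feature 0 matters
print(permutation_importance(model, X, y, 1))  # 0.0: feature 1 is ignored
```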

Computation doi: 10.3390/computation11050089

Authors: Tzu-Hsin Liu, He-Yao Hsu, Jau-Chuan Ke, Fu-Min Chang

This work considers a preemptive priority queueing system with vacation, where the single server may break down with imperfect coverage. Various combinations of server vacation priority queueing models have been studied by many scholars. A common assumption in these models is that the server will only resume its normal service rate after the vacation is over. However, such an assumption is restrictive in real-world situations. Hence, in this study, the vacation is interrupted if a customer is waiting for service in the system at the moment a service is completed during the vacation. The stationary probability distribution is derived using the probability generating function approach. We also develop a variety of performance measures and provide a simple numerical example to illustrate them. Optimization analysis is finally carried out, including cost optimization and tri-objective optimization.

Computation doi: 10.3390/computation11050088

Authors: Priyanka Chauhan, Gururaj Kudur Jayaprakash, Isha Soni, Mamta Sharma, Juan Pablo Mojica-Sànchez, Shashanka Rajendrachari, Praveen Naik

In the current work, based on Koopmans' approximation, the global and local electron transport characteristics of dihydroxybenzenes have been examined using density functional theory to understand their antioxidant activity. Our experimental and theoretical studies show that hydroquinone has better antioxidant activity than resorcinol and catechol. To identify the antioxidant sites of each dihydroxybenzene molecule, an average analytical Fukui analysis was used. The Fukui analytical results demonstrate that the dihydroxybenzene oxygen atoms serve as antioxidant sites. The experimental and theoretical results are in good agreement with each other; therefore, our results are reliable.
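
A condensed Fukui analysis of the kind described reduces to finite differences of atomic charges across the N, N+1, and N−1 electron systems; a sketch with made-up charges, not the paper's DFT results:

```python
# Condensed Fukui indices from atomic charges q of the N-, (N+1)- and
# (N-1)-electron systems (standard finite-difference recipe for charges):
#   f-_k = q_k(N-1) - q_k(N)   (site for electrophilic attack)
#   f+_k = q_k(N)   - q_k(N+1) (site for nucleophilic attack)

def fukui_indices(q_N, q_Nplus1, q_Nminus1):
    f_plus  = [a - b for a, b in zip(q_N, q_Nplus1)]
    f_minus = [a - b for a, b in zip(q_Nminus1, q_N)]
    return f_plus, f_minus

# Toy 3-atom fragment with assumed charges (e.g. C, O, H)
q_N       = [0.10, -0.40, 0.30]
q_Nplus1  = [0.05, -0.60, 0.25]
q_Nminus1 = [0.20, -0.20, 0.40]
f_plus, f_minus = fukui_indices(q_N, q_Nplus1, q_Nminus1)
print(f_plus)   # the largest value marks the most reactive site
```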

Computation doi: 10.3390/computation11050087

Authors: P. V. Dunchenkin, V. A. Cherekaeva, T. V. Yakovleva, A. V. Krysko

This study focuses on the topological optimization of adhesive overlap joints for structures subjected to longitudinal mechanical loads. The aim is to reduce peak stresses at the joint interface of the elements. Peak stresses in such joints can lead to failure of both the joint and the structure itself. A new approach based on Rational Approximation of Material Properties (RAMP) and the Finite Element Method (FEM) has been proposed to minimize peak stresses in multi-layer composite joints. Using this approach, the peak von Mises stresses of the optimal structural joint have been reduced by up to 50% under mechanical loading in the longitudinal direction. The paper includes numerical examples of different types of structural element connections.
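
For reference, the RAMP scheme interpolates the material stiffness between void (ρ = 0) and solid (ρ = 1) as E(ρ) = E₀ρ/(1 + q(1 − ρ)); a sketch in its standard form, with illustrative E₀ and penalization q (not the paper's values):

```python
# RAMP (Rational Approximation of Material Properties) stiffness
# interpolation for topology optimization: intermediate densities are
# penalized so the optimizer is driven toward 0/1 (void/solid) designs.

def ramp_stiffness(rho, E0=1.0, q=8.0):
    """Interpolated Young's modulus for density rho in [0, 1]."""
    return E0 * rho / (1.0 + q * (1.0 - rho))

print(ramp_stiffness(0.0))  # 0.0  (void)
print(ramp_stiffness(1.0))  # 1.0  (solid)
print(ramp_stiffness(0.5))  # 0.1  (intermediate density is penalized)
```

Unlike the power-law SIMP scheme, RAMP has a nonzero stiffness gradient at ρ = 0, which is often cited as a reason to prefer it.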

Computation doi: 10.3390/computation11050086

Authors: Carlos Montenegro, Víctor Medina, Helbert Espitia

Automatic emotion identification allows for obtaining information on emotions experienced by an individual during certain activities, which is essential for improving their performance or preparing for similar experiences. This document aims to establish the clusters of variables associated with the identification of emotions when a group of students takes a foreign language exam in Portuguese. Once the data clusters are determined, it is possible to establish the perception of emotions in the students with relevant variables and their respective decision thresholds. This study can later be used to build a model that relates the measured variables and the student's performance so that strategies can be generated to help the student achieve better results on the test. The results indicate that the clusters and range values of the variables can be obtained to observe changes in the concentration of the students. This preliminary information can be used to design a fuzzy inference system to identify the student's state of concentration.

]]>Computation doi: 10.3390/computation11050085

Authors: Artem Obukhov Denis Dedov Andrey Volkov Daniil Teselkin

In virtual reality (VR) systems, a problem is the accurate reproduction of the user&rsquo;s body in a virtual environment using inverse kinematics because existing motion capture systems have a number of drawbacks, and minimizing the number of key tracking points (KTPs) leads to a large error. To solve this problem, it is proposed to use the concept of a digital shadow and machine learning technologies to optimize the number of KTPs. A technique for collecting movement data from a virtual avatar is implemented, nonlinear dynamic processes of human movement are modeled on the basis of a digital shadow, the problem of optimizing the number of KTPs is formulated, and an overview of the applied machine learning algorithms and the metrics for their evaluation is given. An experiment on a dataset formed from virtual avatar movements shows the following results: three KTPs do not provide sufficient reconstruction accuracy, while five or seven KTPs are optimal; among the algorithms, the most efficient, in descending order, are AdaBoostRegressor, LinearRegression, and SGDRegressor. During reconstruction with AdaBoostRegressor, the maximum deviation is no more than 0.25 m, and the average is no more than 0.10 m.

]]>Computation doi: 10.3390/computation11040084

Authors: Fatemeh Mollaamin

In this article, monkeypox is studied as a zoonotic poxvirus disease which can occur in humans and other animals due to the substitution of the amino acid serine with methionine. We investigate (+)-catechin, betulinic acid, ursolic acid, quercetin-3-O-galactoside, luteolin-7-O-glucoside, and myricetin in Sarracenia purpurea drugs from the Sarraceniaceae family for treating monkeypox disease. This is performed via adsorption onto the surface of a (6,6) armchair single-walled carbon nanotube (SWCNT) at the B3LYP/6-311+G(2d,p) level of theory in a water medium as the drug delivery method at 300 K. Sarracenia purpurea has attracted much attention for use in the clinical treatment of monkeypox disease due to the adsorption of its effective compounds (+)-catechin, betulinic acid, ursolic acid, quercetin-3-O-galactoside, luteolin-7-O-glucoside, and myricetin onto the surface of a (6,6) armchair SWCNT, a process which introduces an efficient drug delivery system, characterized through NMR, IR, and UV-VIS data analysis of the optimized structure. In addition to the lowering of the energy gap (&#8710;E = ELUMO &minus; EHOMO), the HOMO&ndash;LUMO analysis has illustrated the charge transfer interactions taking place within (+)-catechin, betulinic acid, ursolic acid, quercetin-3-O-galactoside, luteolin-7-O-glucoside, and myricetin. The atomic charges have provided a proper perception of the molecular theory and the energies of the fundamental molecular orbitals.

]]>Computation doi: 10.3390/computation11040083

Authors: Eakasit Sritham Navaphattra Nunak Ekarin Ongwongsakul Jedsada Chaishome Gerhard Schleining Taweepol Suesut

The formation of fouling deposits on heat exchanger surfaces is one of the major concerns in thermal processes. The fouling behavior of food materials is complex, and its mechanism remains, in general, unclear. This study was aimed at developing a predictive model for the soymilk fouling deposit formed on heated surfaces using dimensional analysis. Relevant variables affecting the fouling deposit mass could be grouped into six dimensionless terms using Buckingham&rsquo;s pi-theorem. Experimental data were obtained from a lab-scale plate heat exchanger. A simple model developed using the experimental data under process conditions with the product inlet temperature, the product outlet temperature, and the plate surface temperature in the ranges of 50&ndash;55 &deg;C, 65&ndash;70 &deg;C, and 70&ndash;85 &deg;C, respectively, exhibited a good performance in the prediction of the soymilk fouled mass. The correlation coefficient between the predicted and experimental values of fouled mass was 0.97, with an average relative error of 9.03%. Within the ranges of product inlet temperature and plate surface temperatures studied, this model offers an opportunity to estimate the soymilk fouling mass with acceptable accuracy.

]]>Computation doi: 10.3390/computation11040082

Authors: Thomas Zisis Konstantinos Vasilopoulos Ioannis Sarris

The current study examines how different types of passengers (elders, travelers with luggage, travelers without luggage, and mixed population) affect the evacuation process in railway tunnels after a fire accident based on Fractional Effective Dose (FED) index values. A 20 MW diesel pool fire in an immobilized train located inside a straight, rectangular railroad tunnel that is ventilated by a longitudinal jet fan ventilation system is the scenario under consideration. Two fire scenarios were examined, one with and one without ventilation, combined with four evacuation scenarios. The numerical simulation of the fire and the evacuation process is conducted with the Fire Dynamics Simulator and Evacuation code (FDS + Evac), which is a Large Eddy Simulator (LES) for low-Mach thermally driven flows. The results (evacuation times, walking speeds, and mean and max FED values) are compared for each passenger type. It is found that, during the evacuation from a railway tunnel fire accident, the most affected groups are the elderly, because of their lower movement speed, and travelers with luggage, because of their increased dimensions. It is also shown that a non-homogenous population has increased uptake of combustion products and longer evacuation times than a homogenous population with similar geometrical characteristics.

]]>Computation doi: 10.3390/computation11040081

Authors: Dominika Petríková Ivan Cimrák

Deep learning (DL) and convolutional neural networks (CNNs) have achieved state-of-the-art performance in many medical image analysis tasks. Histopathological images contain valuable information that can be used to diagnose diseases and create treatment plans. Therefore, the application of DL for the classification of histological images is a rapidly expanding field of research. The popularity of CNNs has led to a rapid growth in the number of works related to CNNs in histopathology. This paper aims to provide a clear overview for better navigation. In this paper, recent DL-based classification studies in histopathology using strongly annotated data have been reviewed. All the works have been categorized from two points of view. First, the studies have been categorized into three groups according to the training approach and model construction: 1. fine-tuning of pre-trained networks for one-stage classification, 2. training networks from scratch for one-stage classification, and 3. multi-stage classification. Second, the papers summarized in this study cover a wide range of applications (e.g., breast, lung, colon, brain, kidney). To help navigate through the studies, the classification of reviewed works into tissue classification, tissue grading, and biomarker identification was used.

]]>Computation doi: 10.3390/computation11040080

Authors: Cira Perna Marilena Sibillo

Comparison and cultural exchange always enrich and produce innovative and interesting results [...]

]]>Computation doi: 10.3390/computation11040079

Authors: Mohammad Afazal Shubham Gupta Abhishek Tevatia Saba Afreen Arnab Chanda

Dental trauma is a serious and highly prevalent health issue across the globe. Most frequent dental injuries result in the loss of teeth and affect the overall quality of life. The loss of a tooth is usually compensated by a dental implant. The common methods adopted while placing the implant tooth are platform switching and platform matching. A plethora of works has studied the qualitative performance of these methods clinically across different situations. However, a detailed comparative work studying the mechanical parameters in depth has not been attempted yet. In this computational work, two commonly available platform-switched and one platform-matched implant-abutment configurations were compared. A 3D model of an implant (5.5 &times; 9.5 mm) was designed and inserted into a human mandibular bone block using computer-aided design (CAD) and extracted clinical imaging data. Three separate models of implant-abutment configurations were analyzed: Platform Switched (PS)-I, a 5.5 mm implant with a 3.8 mm wide abutment; Platform Switched (PS)-II, a 5.5 mm implant with a 4.5 mm wide abutment; and Platform Matched (PM), a 5.5 mm implant with a 5.5 mm wide abutment. Clinically relevant vertical, horizontal, and oblique occlusal loadings were applied to each model to characterize the mechanical response. Mechanical parameters such as von Mises stresses, deformations, and strain energies were obtained using finite element modeling (FEM). These parameters showed lower values for platform switching within the peri-implant bone, which may help to limit marginal bone loss. However, the same parameters increased more in the abutment, implant, and screw for the platform-switched implant configurations than for the platform-matched configuration. The computational framework, along with the results, is anticipated to guide clinicians and medical practitioners in making better decisions when selecting among the commonly available methods.

]]>Computation doi: 10.3390/computation11040078

Authors: Sarah El Himer Mariyam Ouaissa Mariya Ouaissa Moez Krichen Mohannad Alswailim Mutiq Almutiq

This work aims to create a web-based real-time monitoring system for electrical energy consumption inside a specific residence. This electrical energy is generated from a micro-CPV system mounted on the roof of this residence. The micro-CPV is composed of a Fresnel lens as the main optical element, a spherical lens as the secondary optical element, and a multi-junction solar cell. A tiny photovoltaic concentrator system with a geometric concentration ratio of 100&times; is analyzed in the first part of this study, while the second part is designed to monitor the electricity generated by the micro-CPV system. An ESP8266 controller chipset is used to build the sensing peripheral node, which controls a relay and a PZEM-004T current sensor. As a result, the optical element used has approximately 83% optical efficiency, with an acceptance angle of 1.5&deg;. Regarding the monitoring system, the architecture demonstrates the ability of the system to monitor current and energy consumption in real time using a computer or smartphone and a web server specially designed to continuously update the power consumption profile in a specific smart home environment. The whole electric power consumption monitoring system generally worked well, and the monitoring system is configured to provide excellent accuracy, to within 0.6%.

]]>Computation doi: 10.3390/computation11040077

Authors: Georgios R. K. Aretis Apostolos A. Gkountas Dimitrios G. Koubogiannis Ioannis E. Sarris

Waste heat recovery is one of the main practices used to reduce the carbon footprint of the industrial sector amid environmental concerns. The supercritical carbon dioxide (s-CO2) cycle is one of the most attractive heat-to-power technologies; due to the abrupt variation in CO2 properties in the vicinity of its critical point, small compression work is required and a high cycle efficiency is achieved. In the literature, among the various proposed layouts, the recompression s-CO2 Brayton cycle is considered to be the most efficient one. The most critical component of such a cycle is the main compressor, as the related usual design procedures have been developed in the past for an ideal gas as the working fluid. This study presents a methodology for the preliminary design of a centrifugal compressor with a vaned diffuser, suitable for fulfilling the desired operating requirements of a particular supercritical CO2 recompression Brayton cycle. Furthermore, it demonstrates the numerical investigation of the three-dimensional (3D) flow phenomena occurring in it, focusing on the investigation of possible condensation. To this end, a one-dimensional flow model was developed to provide information regarding the geometry of the compressor and predict its prospective performance. Commercial computational fluid dynamics (CFD) software was then employed to examine the three-dimensional flow. The effect of accuracy in the evaluation of real gas properties approaching the critical point was examined, showing that a look-up table with more points around the critical point can reduce the numerical relative error by up to 0.3% for the value of the specific heat capacity. In addition, the possibility of condensation occurrence was investigated at the impeller&rsquo;s inlet, where the flow is accelerated. The supersaturation pressure ratio was defined and implemented in order to identify regions where the static pressure is lower than the saturation pressure, possibly leading to local two-phase flow.

]]>Computation doi: 10.3390/computation11040076

Authors: A. Ushasree Vipul Agarwal

This paper presents a novel design for and an experimental study of a dual-polarized quad-port MIMO antenna. The design achieves resonance at five distinct frequency bands with reduced mutual coupling. The design includes a single annular ring slot, four truncated rectangular corners, and a truncated aperture to improve resonance behavior. The design is then extended to a four-port MIMO antenna by including a ground-plane slit to enhance isolation between antenna elements at the center resonance band. The antenna achieves resonances at 5 distinct bands, ranging from 1.5 to 8.4 GHz, with significant mutual coupling reductions. The resonances of the quad-port pentaband MIMO antenna are achieved at 1.55 GHz (1.5–1.65 GHz), 2.5 GHz (2.4–2.7 GHz), 5.2 GHz (5–5.85 GHz), 7.3 GHz (7.1–7.4 GHz), and 8.15 GHz (7.9–8.4 GHz), with respective mutual coupling reductions of 27 dB, 37 dB, 21 dB, 29 dB, and 21 dB. Additionally, the 3 dB axial ratio bandwidth (ARBW) is observed at 6.5% (1.5–1.6 GHz) and 15% (2.4–2.7 GHz) in 2 distinct bands, and the envelope correlation coefficient and diversity gain are calculated within the specified band range. Experimental measurements of the prototype for the quad-port antenna are conducted, with excellent agreement found between the results and the simulations.
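For context on the diversity metrics mentioned above, the envelope correlation coefficient (ECC) of a two-port antenna pair is conventionally estimated from its S-parameters; a minimal sketch follows, where the sample S-parameter values are made up for illustration and are not the paper's measurements.

```python
import math

def ecc_two_port(s11, s12, s21, s22):
    """Standard S-parameter estimate of the envelope correlation coefficient."""
    num = abs(s11.conjugate() * s12 + s21.conjugate() * s22) ** 2
    den = ((1 - abs(s11) ** 2 - abs(s21) ** 2)
           * (1 - abs(s22) ** 2 - abs(s12) ** 2))
    return num / den

# Illustrative (made-up) S-parameters for a well-isolated antenna pair
ecc = ecc_two_port(0.1 + 0.05j, 0.02 - 0.01j, 0.02 - 0.01j, 0.1 + 0.05j)
dg = 10 * math.sqrt(1 - ecc ** 2)   # apparent diversity gain approximation
```

An ECC well below 0.5 indicates good decorrelation between the MIMO elements, and the diversity gain then approaches its ideal value of 10.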

]]>Computation doi: 10.3390/computation11040075

Authors: Ashraf Khalil Asma Alfergani Farhat M. Shaltami Ali Asheibi

In this paper, the robust stabilization of a networked microgrid system is presented. The microgrid implements a master-slave control architecture in which the communication channel is utilized to exchange the reference current signals. With this structure, a time delay exists in the reference control signal, which may lead to instability. The analysis of the control strategy is carried out in the dq reference frame. The microgrid is constituted by PV and wind energy sources supplying a load through voltage source inverters. The stochastic nature of renewable energy sources introduces uncertainties which can be represented as fluctuations in the voltage and the current. The main contribution of the paper is formulating the controller design of the microgrid, with communication delay and uncertainties in the model, as an H&infin; control problem; a Lyapunov&ndash;Krasovskii functional is utilized to develop a stability criterion in bilinear matrix inequality form. A grey wolf optimizer is used to minimize the performance index and derive the stabilizing controller. The microgrid performance is tested through simulation using the time-varying nonlinear model of the microgrid. The results prove that satisfactory current and power sharing are attained even in the presence of time delays and uncertainties.

]]>Computation doi: 10.3390/computation11040074

Authors: Olga Timofeeva Alexey Sannikov Maria Stepanenko Tatiana Balashova

One of the pressing tasks of the contemporary logistics business using the &ldquo;just in time&rdquo; supply planning concept is to distribute manufactured goods among the objects of the distribution network in the most efficient manner at the lowest possible cost. The article is devoted to the problem of finding the optimal path in network structures. The problem statement for multilayer data transmission networks (MDTN), which are one possible representation of multimodal transport networks, is considered. Thus, each MDTN layer can be represented as a separate type of transport. The problem is solved by modifying the Bellman&ndash;Ford algorithm. Load testing of the modified method was performed, and a comparative analysis was given, including an assessment of speed and performance, proving the effectiveness of the results of the study. Based on the results of the comparative analysis, recommendations for using the modified version of the Bellman&ndash;Ford algorithm in practical problems of optimizing logistics networks are proposed. The results obtained can be used in practice not only in logistics networks but also in the construction of smart energy networks, as well as in other subject areas that require the optimization of multilayer graph structures.
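The abstract does not reproduce the authors' modification, so as background only, the classical Bellman&ndash;Ford relaxation they build on can be sketched over a multilayer edge list, where nodes carry a (layer, id) label and inter-layer transfers are ordinary weighted edges; all layer names, node ids, and weights below are illustrative assumptions.

```python
def bellman_ford(nodes, edges, source):
    """edges: list of (u, v, weight); returns shortest distances from source."""
    INF = float("inf")
    dist = {v: INF for v in nodes}
    dist[source] = 0
    for _ in range(len(nodes) - 1):      # at most |V|-1 relaxation sweeps
        changed = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                changed = True
        if not changed:                  # early exit once distances stabilize
            break
    return dist

# Two layers (road, rail) joined by one modal-transfer edge
nodes = [("road", "A"), ("road", "B"), ("rail", "B"), ("rail", "C")]
edges = [
    (("road", "A"), ("road", "B"), 5),
    (("road", "B"), ("rail", "B"), 1),   # cost of switching transport mode
    (("rail", "B"), ("rail", "C"), 2),
]
dist = bellman_ford(nodes, edges, ("road", "A"))
```

Encoding the layer in the node label means the multimodal structure needs no change to the relaxation loop itself; a modification like the paper's would alter how edges between layers are weighted or traversed.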

]]>Computation doi: 10.3390/computation11040073

Authors: Adis Puška Anđelka Štilić Željko Stević

The focus of this study is on the significance of location in establishing distribution centers. The key question when selecting a location is regarding which location would contribute the most to the growth of a company&rsquo;s business through the establishment of distribution centers. To answer this question, we conducted research in the Br&#269;ko District of BiH in order to determine the best location for a distribution center using expert decision-making based on linguistic values. In order to use these values when selecting locations, a fuzzy set was formed using the IMF SWARA (Improved Fuzzy Stepwise Weight Assessment Ratio Analysis) and fuzzy CRADIS (Compromise Ranking of Alternatives from Distance to the Ideal Solution) methods. The IMF SWARA method was utilized to determine the weights of the criteria, and the fuzzy CRADIS method was employed to rank the locations based on expert ratings. The location for the construction of distribution centers at Bodari&scaron;te was rated the worst, while the McGowern Base location was rated the best. Based on these findings, the research question was answered, and it was demonstrated that fuzzy methods could be utilized in the selection of distribution center locations. Hence, we recommend that future research be performed on the application of fuzzy methods in the expert selection of potential sites for distribution centers.

]]>Computation doi: 10.3390/computation11040072

Authors: Hao Pang Gracious Ngaile

Although the full form of the Rayleigh&ndash;Plesset (RP) equation more accurately depicts the bubble behavior in a cavitating flow than its reduced form, it finds much less application than the latter in computational fluid dynamics (CFD) simulation due to its high stiffness. The traditional variable time-step scheme for the full-form RP equation is difficult to integrate with a CFD program, since it requires a tiny time step at the singularity point for convergence, and this step size may be incompatible with the time marching of the conservation equations. This paper presents two stable and efficient numerical solution schemes based on the finite difference method and the Euler method so that the full-form RP equation can be better accepted by CFD programs. By employing a truncation bubble radius to approximate the minimum bubble size in the collapse stage, the proposed schemes solve for the bubble radius and wall velocity in an explicit way. The proposed solution schemes are more robust for a wide range of ambient pressure profiles than the traditional schemes and avoid excessive refinement of the time step at the singularity point. Since the proposed solution schemes can calculate the effects of the second-order term, liquid viscosity, and surface tension on the bubble evolution, they provide a more accurate estimation of the wall velocity for the vaporization or condensation rate, which is widely used in the cavitation models of CFD simulations. The legitimacy of the solution schemes is demonstrated by the agreement between their results and established ones from the literature.
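The abstract does not give the discretization itself, so the following is only a hedged sketch of the general idea: an explicit Euler march of the full Rayleigh&ndash;Plesset equation in which the bubble radius is capped at a truncation radius during collapse. The water properties, the polytropic gas closure for the interior pressure, and the 5% truncation fraction are illustrative assumptions, not the authors' exact scheme.

```python
RHO, MU, SIGMA = 998.0, 1.0e-3, 0.0728   # water: density, viscosity, surface tension (SI)
P_V, KAPPA = 2339.0, 1.4                 # vapor pressure, polytropic index (assumed)
R0, P0 = 50e-6, 101325.0                 # initial bubble radius, ambient pressure
PG0 = P0 + 2 * SIGMA / R0 - P_V          # initial gas pressure from static equilibrium
R_TRUNC = 0.05 * R0                      # truncation (minimum) bubble radius

def rp_step(R, Rdot, p_inf, dt):
    """One explicit Euler step of the full Rayleigh-Plesset equation."""
    p_b = PG0 * (R0 / R) ** (3 * KAPPA) + P_V      # interior pressure: polytropic gas + vapor
    Rddot = ((p_b - p_inf) / RHO                   # pressure driving term
             - 1.5 * Rdot ** 2                     # second-order (inertial) term
             - 4 * MU * Rdot / (RHO * R)           # viscous damping
             - 2 * SIGMA / (RHO * R)) / R          # surface tension
    R_new = R + Rdot * dt
    if R_new < R_TRUNC:                            # cap the collapse at the truncation radius
        return R_TRUNC, 0.0
    return R_new, Rdot + Rddot * dt

# Gentle growth case: ambient pressure halved, marched for 5 microseconds
R, Rdot = R0, 0.0
for _ in range(5000):
    R, Rdot = rp_step(R, Rdot, 0.5 * P0, dt=1e-9)
```

In this gentle growth case the bubble simply expands past its initial radius; in a collapse-driven run, the truncation radius is what keeps the explicit scheme from needing a vanishing time step at the singularity point.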

]]>Computation doi: 10.3390/computation11040071

Authors: Dmitry Ammosov Maria Vasilyeva

This paper presents a thermo-mechanical model with phase transition considering changes in the mechanical properties of the medium. The proposed thermo-mechanical model is described by a system of partial differential equations for temperature and displacements. In the model, soil deformations occur due to porosity growth caused by ice and water density differences. A finite-element approximation of this model on a fine grid is presented. The linearization from the previous time step is used to handle the nonlinearity of the problem. For reducing the size of the discrete problem, offline and online multiscale approaches based on the Generalized Multiscale Finite Element Method (GMsFEM) are proposed. A two-dimensional model problem simulating the heaving process of heterogeneous soil with a stiff inclusion was considered for testing the mathematical model and the multiscale approaches. Numerical solutions depict the process of soil heaving caused by changes in porosity due to the phase transition. The movement of the phase transition interface was observed. The change of medium properties, including the elastic modulus, was traced and corresponds to the phase transition interface. The proposed multiscale approaches significantly reduce the size of the discrete problem while maintaining reasonable accuracy. However, the online multiscale approach achieves better accuracy than the offline approach with fewer degrees of freedom.

]]>Computation doi: 10.3390/computation11040070

Authors: Michele Bufalo Daniele Bufalo Giuseppe Orlando

In the field of cryptography, many algorithms rely on the computation of modular multiplicative inverses to ensure the security of their systems. In this study, we build upon our previous research by introducing a novel sequence, (zj)j&ge;0, that can calculate the modular inverse of a given pair of integers (a, n), i.e., a^(&minus;1) mod n. The computational complexity of this approach is O(a), which is more efficient than the traditional Euler&rsquo;s phi function method, O(n ln n). Furthermore, we investigate the properties of the sequence (zj)j&ge;0 and demonstrate that all solutions of the problem belong to a specific set, I, that only contains the minimum values of (zj)j&ge;0. This reduces the computational complexity of our method, especially when a &sim; n, and it also opens new opportunities for discovering closed-form solutions for the modular inverse.
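The sequence (zj)j&ge;0 is defined in the paper itself and is not reproduced here; as a hedged illustration of why an O(a)-step procedure suffices at all, note that when gcd(a, n) = 1 the inverse equals (1 + kn)/a for the unique k with 0 &le; k < a that makes 1 + kn divisible by a:

```python
import math

def modinv_scan(a, n):
    """Modular inverse of a mod n by scanning at most a candidates (gcd(a, n) = 1)."""
    if math.gcd(a, n) != 1:
        raise ValueError("inverse does not exist")
    for k in range(a):                   # O(a) iterations in the worst case
        if (1 + k * n) % a == 0:
            return (1 + k * n) // a
    raise AssertionError("unreachable when a and n are coprime")

inv = modinv_scan(7, 40)   # 7 * 23 = 161 = 4 * 40 + 1, so the inverse is 23
```

This brute-force scan only shows the O(a) bound; the paper's contribution is restricting the search to the set I of minimum values of its sequence, which is what pays off when a is close to n.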

]]>Computation doi: 10.3390/computation11040069

Authors: Lorenzo Cardone Stefano Quer

The Maximum Common Subgraph problem has long been proven NP-hard. Nevertheless, it has countless practical applications, and researchers are still searching for exact solutions and scalable heuristic approaches. Driven by applications in molecular science and cyber-security, we concentrate on the Maximum Common Subgraph among an indefinite number of graphs. We first extend a state-of-the-art branch-and-bound procedure working on two graphs to N graphs. Then, given the high computational cost of this approach, we trade off complexity for accuracy, and we propose a set of heuristics to approximate the exact solution for N graphs. We analyze sequential, parallel multi-core, and parallel many-core (GPU-based) approaches, exploiting several leveraging techniques to decrease the contention among threads, improve the workload balance of the different tasks, reduce the computation time, and increase the final result size. We also present several sorting heuristics to order the vertices of the graphs and the graphs themselves. We compare our algorithms with a state-of-the-art method on publicly available benchmark sets. On graph pairs, we are able to speed up the exact computation by a 2&times; factor, pruning the search space by more than 60%. On sets of more than two graphs, all exact solutions are extremely time-consuming and too complex to apply in many real cases. On the contrary, our heuristics are far less expensive (with a lower bound on the speed-up of 10&times;), have far better asymptotic complexity (with speed-ups of up to several orders of magnitude in our experiments), and obtain excellent approximations of the maximal solution, with 98.5% of the nodes on average.

]]>Computation doi: 10.3390/computation11030068

Authors: MieowKee Chan Amin Shams ChanChin Wang PeiYi Lee Yousef Jahani Seyyed Ahmad Mirbagheri

Desalination is a sustainable method to solve global water scarcity. A Response Surface Methodology (RSM) approach is widely applied to optimize desalination performance, but further investigations with additional inputs are restricted. An artificial neural network (ANN) method is proposed to reconstruct the parameters and demonstrate multivariate analysis. Graphene oxide (GO) content, Polyhedral Oligomeric Silsesquioxane (POSS) content, operating pressure, and salinity were combined as input parameters for a four-dimensional regression analysis to predict the three responses: contact angle, salt rejection, and permeation flux. Average coefficient of determination (R2) values ranged between 0.918 and 0.959. A mathematical equation was derived to find global max and min values. Three objective functions and three-dimensional diagrams were applied to optimize effective cost conditions. The model serves as a database for membranologists to decide the amount of GO to use when fabricating membranes, by considering the effects of operating conditions such as salinity and pressure, to achieve the desired salt rejection, permeation flux, contact angle, and cost. The findings suggest that a membrane with 0.0063 wt% of GO, operated at 14.2 atm for a 5501 ppm salt solution, is the preferred optimal condition to achieve high salt rejection and permeation flux simultaneously.

]]>Computation doi: 10.3390/computation11030067

Authors: Dionysios G. Karkoulias Panagiota-Vasiliki N. Bourdousi Dionissios P. Margaris

Minimizing the carbon footprint of the aviation industry is of critical importance for the forthcoming years, allowing the mitigation of climate change through fossil fuel economy. Significant progress toward this goal can be achieved through the aerodynamic optimization of wing surfaces. In a previous study, a custom-designed wing equipped with an Eppler 420 airfoil, including an appendant custom-designed blended winglet, was developed and studied in flight conditions. The present paper researches potential improvements to the aerodynamic behavior of this wing by attempting to regenerate the boundary layer. The main goal was to achieve passive control of the boundary layer, which was approached by means of two different configurations. In the first case, dimples were added at the points where the separation of the boundary layer was expected, for the majority of the wing surface; in the second case, bumps of the same diameter were added at the same points. Both wings were studied at two different Reynolds (Re) numbers and five angles of attack (AoA). The computational fluid dynamics (CFD) simulations were implemented using a pressure-based solver, the spatial discretization was conducted with a second-order upwind scheme, and the k-omega SST (k-&omega; SST) turbulence model was applied by utilizing the pseudo-transient method. The experimental procedure was conducted in an open-type subsonic flow wind tunnel, at a Reynolds number of 86,000, with 3D-printed models of the wings having undergone suitable surface treatment. The numerical and experimental results converged, showing a degradation in the wing&rsquo;s aerodynamic performance when bumps were implemented, as well as a slight improvement for the configuration with dimples.

]]>Computation doi: 10.3390/computation11030066

Authors: Ren Tang Chaoyang Zhang Kai Tang Xiaoyang He Qipeng He

Road lighting is one of the largest consumers of electric energy in cities. Research into energy-saving street lighting is of great significance to city sustainable development and economies, especially given that many countries are now in a period of energy shortage. The control system is critical for energy-saving street lighting, due to its capability to directly change output power. Here, we propose a control system with high intelligence and efficiency, by incorporating improved YOLOv5s with terminal embedded devices and designing a new dimming method. The improved YOLOv5s has more balanced performance in both detection accuracy and detection speed compared to other state-of-the-art detection models, and achieved the highest recall of 67.94%, precision of 81.28%, 74.53% AP50, and frames per second (FPS) of 59 on the DAIR-V2X dataset. The proposed method achieves highly complete and intelligent dimming control based on the prediction labels of the improved YOLOv5s, and a high energy-saving efficiency was achieved during a two-week-long lighting experiment. Furthermore, this system can also contribute to the construction of the Internet of Things, smart cities, and urban security. The control system proposed here offers a novel, high-performance, adaptable, and economical solution to road lighting.

]]>Computation doi: 10.3390/computation11030065

Authors: Abdo H. Guroob

This paper proposes a novel approach, EA2-IMDG (Efficient Approach of Using an In-Memory Data Grid), to improve the performance of replication and scheduling in grid environment systems. Grid environments are widely used for distributed computing, but they often face the challenge of high data access latency and poor scalability. By utilizing an in-memory data grid (IMDG), the aim is to significantly reduce the data access latency and improve the resource utilization of the system. The approach uses the IMDG to store data in RAM, instead of on disk, allowing for faster data retrieval and processing. The IMDG is used to distribute data across multiple nodes, which helps to reduce the risk of data bottlenecks and improve the scalability of the system. To evaluate the proposed approach, a series of experiments were conducted, and its performance was compared with two baseline approaches: a centralized database and a centralized file system. The results of the experiments show that the EA2-IMDG approach improves the performance of replication and scheduling tasks by up to 90% in terms of data access latency and 50% in terms of resource utilization. These results suggest that the EA2-IMDG approach is a promising solution for improving the performance of grid environment systems.

]]>Computation doi: 10.3390/computation11030064

Authors: Marcella Corduas Domenico Piccolo

This article presents an innovative dynamic model that describes the probability distributions of ordered categorical variables observed over time. For this purpose, we extend the definition of the mixture distribution obtained from the combination of a uniform and a shifted binomial distribution (CUB model), introducing time-varying parameters. The model parameters identify the main components ruling the respondent evaluation process: the degree of attraction towards the object under assessment, the uncertainty related to the answer, and the weight of the refuge category that is selected when a respondent is unwilling to elaborate a thoughtful judgement. The method provides a tool to quantify the data from qualitative surveys. For illustrative purposes, the dynamic CUB model is applied to the consumers&rsquo; perceptions and expectations of inflation in Italy to investigate: (a) the effect of the COVID pandemic on inflation beliefs; (b) the impact of income level on respondents&rsquo; expectations.
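For reference, the static CUB mixture that the dynamic model extends combines a shifted binomial (the attraction, or "feeling", component) with a discrete uniform (the uncertainty component). A minimal sketch follows; note that parameterization conventions for the feeling parameter xi vary across the literature, and the paper's time-varying parameters and refuge-category weight are not reproduced here.

```python
from math import comb

def cub_pmf(m, pi, xi):
    """P(R = r) for r = 1..m under a static CUB(pi, xi) model with m categories.

    pi weights the shifted binomial component; (1 - pi) weights the
    uniform (uncertainty) component.
    """
    return [pi * comb(m - 1, r - 1) * (1 - xi) ** (r - 1) * xi ** (m - r)
            + (1 - pi) / m
            for r in range(1, m + 1)]

# 7-point rating scale; illustrative parameters, not estimates from the paper
probs = cub_pmf(7, pi=0.8, xi=0.3)
```

Because the shifted binomial part is a proper pmf on {1, ..., m}, the mixture sums to one for any admissible pi and xi; the dynamic model in the paper lets these parameters evolve over time.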

Computation doi: 10.3390/computation11030063

Authors: Baidaa Mutasher Rashed Nirvana Popescu

Medical image-based diagnosis has advanced significantly, the number of studies in this field is enormous, and the databases created in it are growing rapidly. Examining these data is crucial to find important underlying patterns, and classification is an effective method for identifying them. This work proposes a deep investigation and analysis to evaluate and diagnose medical image data using various classification methods and to critically evaluate their effectiveness. The classification methods utilized include machine-learning (ML) algorithms such as artificial neural networks (ANN), support vector machines (SVM), k-nearest neighbors (KNN), decision trees (DT), random forests (RF), Naïve Bayes (NB), logistic regression (LR), random subspace (RS), and fuzzy logic, as well as a convolutional neural network (CNN) model of deep learning (DL). We applied these methods to two types of datasets: chest X-ray datasets, to classify lung images into normal and abnormal, and melanoma skin cancer dermoscopy datasets, to classify skin lesions into benign and malignant. This work aims to present a model that aids in investigating and assessing the effectiveness of ML approaches and of DL using CNNs in classifying medical databases, and in comparing these methods to identify the most robust ones that produce the best diagnostic performance. Our results show that the classification algorithms used achieve good performance measures.

Computation doi: 10.3390/computation11030062

Authors: Concetta Giaconia Aziz Chamas

Research and development efforts in the field of commercial applications have invested strategic interest in the design of intelligent systems that correctly handle out-of-stock events. An out-of-stock event refers to a scenario in which customers cannot obtain the products they want to buy because they are unavailable. This scenario generates important economic damage to the producer and to the commercial store, so addressing the out-of-stock problem is currently of great interest in the commercial field; in the era of online commerce (e-commerce), it would significantly limit events with a considerable economic impact. For these reasons, the authors propose a deep-learning-based solution for predicting the residual stock amount of a commercial product based on the intelligent analysis of specific visual–commercial data as well as seasonality. By means of a combined deep pipeline embedding a convolutional architecture boosted with a self-attention mechanism and a downstream temporal convolutional network, the authors are able to predict the remaining stock of a particular commodity. By integrating and interpreting climate/seasonal information, customers’ behavior data, and the full history of commercial sales dynamics, it is possible to estimate the residual stock of a certain product and, therefore, define purchase orders efficiently. An accurate prediction of remaining stocks allows an efficient trade-order policy, which results in a significant reduction in out-of-stock events. The experimental results confirmed the effectiveness of the proposed approach, with an accuracy in the prediction of the remaining stock of such products greater than 90%.

Computation doi: 10.3390/computation11030061

Authors: Yincong Ma Kit Guan Lim Min Keng Tan Helen Sin Ee Chuo Ali Farzamnia Kenneth Tze Kin Teo

There is no doubt that the autonomous vehicle is an important development direction of the auto industry, and, thus, more and more scholars are paying attention to research in this field. Since path planning plays a key role in the operation of autonomous vehicles, scholars attach great importance to it. Although path planning has been applied in many fields, there are still problems, such as the low efficiency of path planning and collision risk during driving. To solve these problems, an automotive-vehicle-oriented rapidly exploring random tree (AV-RRT) non-particle path planning method for autonomous vehicles is proposed. On the premise of ensuring safety and meeting the vehicle’s kinematic constraints through the expansion of obstacles, a dynamic step size is used for random tree growth. A non-particle collision detection (NPCD) algorithm and a path modification (PM) strategy are proposed for the collision risk in the turning process, and geometric constraints are used to represent possible security threats, so as to improve the efficiency and safety of global path driving and to provide a reference for research on driverless vehicles.

Computation doi: 10.3390/computation11030060

Authors: Oleksii Tretiak Dmitriy Kritskiy Igor Kobzar Mariia Arefieva Volodymyr Selevko Dmytro Brega Kateryna Maiorova Iryna Tretiak

In this article, the main causes of vibration in the thrust bearing of hydrogenerator motors rated at 320 MW are considered, along with the main types of internal and surface defects that appear on the working surface of the thrust bearing disc during long-term operation. A method of three-dimensional modeling of such defects is presented, and an assessment of the stress–strain state of the thrust bearing disc is proposed, taking into account the main forces acting on the working surface using the finite element method. An analysis of the possible further operation of discs with similar defects, in accordance with the technical requirements, is carried out, and ways to eliminate the defects are considered.

Computation doi: 10.3390/computation11030059

Authors: Shubhangi A. Joshi Anupkumar M. Bongale P. Olof Olsson Siddhaling Urolagin Deepak Dharrao Arunkumar Bongale

Early detection and timely treatment of breast cancer improve survival rates and patients’ quality of life. Hence, many computer-assisted techniques based on artificial intelligence are being introduced into the traditional diagnostic workflow. This inclusion of automatic diagnostic systems speeds up diagnosis and helps medical professionals by relieving their work pressure. This study proposes a breast cancer detection framework based on a deep convolutional neural network. To mine useful information about breast cancer from publicly available breast histopathology images at the 40× magnification factor, the BreakHis and IDC (invasive ductal carcinoma) datasets are used. The pre-trained convolutional neural network (CNN) models EfficientNetB0, ResNet50, and Xception are tested in this study. The top layers of these architectures are replaced by custom layers to make the whole architecture specific to the breast cancer detection task. The customized Xception model outperformed the other frameworks, giving an accuracy of 93.33% on the 40× images of the BreakHis dataset. The networks are trained on 70% of the BreakHis 40× histopathological images and validated on the remaining 30% as unseen testing and validation data. The histopathology image set is augmented by performing various image transforms, and dropout and batch normalization are used as regularization techniques. Further, the proposed model with the enhanced pre-trained Xception CNN is fine-tuned and tested on part of the IDC dataset, with training, validation, and testing percentages of 60%, 20%, and 20%, respectively. It obtained an accuracy of 88.08% on the IDC dataset for recognizing invasive ductal carcinoma in H&E-stained histopathological samples of breast tissue. Weights learned during training on the BreakHis dataset are kept unchanged while training the model on the IDC dataset. Thus, this study enhances and customizes the functionality of a pre-trained model for the classification task on the BreakHis and IDC datasets, and applies a transfer learning approach of the designed model to another similar classification task.

Computation doi: 10.3390/computation11030058

Authors: Andrii Humennyi Liliia Buival Zeyan Zheng

This article investigates scientific research directions for flying cars at the preliminary design stage and provides a rationale for the choice of research in this area. At present, the population of the Earth is gradually increasing, and traffic congestion will become a common phenomenon in cities in the future. This work used the methods of theoretical and statistical analysis to form an overall picture of this area of research. We analyzed statistical data from scientists who have dealt with flying cars and the associated issues, and we gave a rationale for the choice of flying cars as the object of scientific research. Readers can use this information as a starting point in their understanding of flying car design. This analysis of well-known scientific works suggests possible research directions for designing a flying car that combines the advantages of an airplane and a car, can take off from and land on a normal highway over a short distance, and helps people reach their destination quickly and easily.

Computation doi: 10.3390/computation11030057

Authors: Karthick Sampath Subburayan Veerasamy Ravi P. Agarwal

In this article, we consider a delayed system of first-order hyperbolic differential equations. The presence of the delay term in first-order hyperbolic delay differential equations poses significant challenges in both analysis and numerical solution. The delay term also makes it more difficult to use standard numerical methods, as these often require that the differential equation be evaluated at the current time step. To overcome these challenges, specialized numerical methods and analytical techniques have been developed for first-order hyperbolic delay differential equations. We investigated and presented analytical results, such as the maximum principle and stability results. The propagation of discontinuities in the solution was also discussed, providing a framework for understanding its behavior. We presented a fractional-step method using a backward finite difference scheme and showed, through the derivation of an error estimate, that the scheme is almost first-order convergent in space and time. We also demonstrated the practical application of the proposed method to solving variable delay differential equations. The proposed algorithm is based on a numerical approximation method that utilizes a finite difference scheme to discretize the differential equation. We validated our theoretical results through numerical experiments.
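The structural difficulty described above, namely that each step needs the solution a delay τ in the past, can be seen on the simplest scalar delay equation. The sketch below is a generic textbook illustration on x'(t) = −a·x(t−τ), not the authors' fractional-step scheme for the hyperbolic system; it assumes τ > 0 is an integer multiple of the step h, so the delayed value is already on the grid:

```python
def backward_euler_delay(a, tau, history, T, h):
    """Backward-difference sketch for x'(t) = -a * x(t - tau) on (0, T],
    with x(t) = history(t) on [-tau, 0].  Because the right-hand side is
    evaluated at the delayed time t - tau, the needed value already sits
    tau earlier on the grid.  Assumes tau > 0 is an integer multiple of h."""
    lag = round(tau / h)
    n_steps = round(T / h)
    # Grid values starting at t = -tau; xs[lag] corresponds to x(0).
    xs = [history(-tau + i * h) for i in range(lag + 1)]
    for _ in range(n_steps):
        delayed = xs[len(xs) - lag]          # x(t_{n+1} - tau), already known
        xs.append(xs[-1] - h * a * delayed)  # step from t_n to t_{n+1}
    return xs[lag:]                          # solution on [0, T]
```

Note that although the scheme is written in backward (implicit) form, the delayed term makes each step explicitly computable, which is the key simplification delay equations offer to time stepping.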

Computation doi: 10.3390/computation11030056

Authors: Dinar Ajeng Kristiyanti Imas Sukaesih Sitanggang Annisa Annisa Sri Nurdiati

(1) Background: Feature selection is the biggest challenge in feature-rich sentiment analysis to select the best (relevant) feature set, offer information about the relationships between features (informative), and be noise-free from high-dimensional datasets to improve classifier performance. This study aims to propose a binary version of a metaheuristic optimization algorithm based on Swarm Intelligence, namely the Salp Swarm Algorithm (SSA), as feature selection in sentiment analysis. (2) Methods: Significant feature subsets were selected using the SSA. Transfer functions with various types of the form S-TF, V-TF, X-TF, U-TF, Z-TF, and the new type V-TF with a simpler mathematical formula are used as a binary version approach to enable search agents to move in the search space. The stages of the study include data pre-processing, feature selection using SSA-TF and other conventional feature selection methods, modelling using K-Nearest Neighbor (KNN), Support Vector Machine, and Naïve Bayes, and model evaluation. (3) Results: The results showed an increase of 31.55% to the best accuracy of 80.95% for the KNN model using SSA-based New V-TF. (4) Conclusions: We have found that SSA-New V3-TF is a feature selection method with the highest accuracy and less runtime compared to other algorithms in sentiment analysis.
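Two of the classic transfer-function families named above, S-shaped and V-shaped, map a search agent's continuous position to bit-flip probabilities so the salps can move in a binary feature space. A minimal sketch, with our own function names (the paper's new V-TF variant is not reproduced here):

```python
import math, random

def s_shaped(x):
    """Classic S-shaped (sigmoid) transfer function."""
    return 1.0 / (1.0 + math.exp(-x))

def v_shaped(x):
    """Classic V-shaped transfer function."""
    return abs(math.tanh(x))

def binarize(position, tf, rng):
    """Map a continuous search-agent position to a 0/1 feature mask:
    coordinate i selects its feature with probability tf(position[i])."""
    return [1 if rng.random() < tf(x) else 0 for x in position]
```

The resulting 0/1 mask picks the feature subset that is then scored by the wrapped classifier (KNN, SVM, or Naïve Bayes in the study).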

Computation doi: 10.3390/computation11030055

Authors: Alexander N. Pchelintsev Andrey M. Solovyov Mikhail E. Semenov Nikolay I. Selvesyuk Vladislav V. Kosyanchuck Evgeniy Yu. Zybin

The article suggests design principles of an advanced onboard computer network with an intelligent control system. It describes the main advantages of designing an onboard computer network based on fibre optics, which allows the implementation of an integrated intellectual system performing intelligent inference in emergency situations. The suggested principles significantly increase the reliability and fault tolerance of avionics suites, which, in turn, enhances flight safety. The suggested concept aims to solve a number of important problems including the design of a switchless computing environment, the development of methods for dynamic reconfiguration of avionics suites with such an environment, and the implementation of a specialised multilevel intelligent avionics system within this environment.

Computation doi: 10.3390/computation11030054

Authors: Ahmed M. Elaiw Abdualaziz K. Aljahdali Aatef D. Hobiny

This article formulates and analyzes a discrete-time human immunodeficiency virus type 1 (HIV-1) and human T-lymphotropic virus type I (HTLV-I) coinfection model with latent reservoirs. We consider that HTLV-I infects CD4+ T cells, while HIV-1 has two classes of target cells: CD4+ T cells and macrophages. The discrete-time model is obtained by discretizing the original continuous-time model with the non-standard finite difference (NSFD) approach. We establish that NSFD maintains the positivity and boundedness of the model’s solutions. We derive four threshold parameters that determine the existence and stability of the four equilibria of the model. The Lyapunov method is used to examine the global stability of all equilibria, and the analytical findings are supported by numerical simulation. The impact of latent reservoirs on the HIV-1 and HTLV-I co-dynamics is discussed. We show that incorporating the latent reservoirs into the HIV-1 and HTLV-I coinfection model reduces the basic HIV-1 single-infection and HTLV-I single-infection reproduction numbers. We establish that neglecting the latent reservoirs leads to overestimation of the required HIV-1 antiviral drugs. Moreover, we show that lengthening the latent phase can suppress the progression of viral coinfection. This may draw the attention of scientists and pharmaceutical companies to creating new treatments that prolong the latency period.
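The positivity-preserving property of NSFD schemes that the model relies on can be illustrated on the simplest decay equation; this is a generic textbook sketch, not the coinfection model itself:

```python
import math

def nsfd_decay(a, x0, h, n_steps):
    """Non-standard finite difference scheme for x' = -a x, written as
    (x_{n+1} - x_n)/phi = -a x_{n+1} with the denominator function
    phi = (1 - exp(-a h))/a.  The resulting update
        x_{n+1} = x_n / (1 + a phi)
    keeps every iterate positive for any step size h."""
    phi = (1.0 - math.exp(-a * h)) / a
    xs = [x0]
    for _ in range(n_steps):
        xs.append(xs[-1] / (1.0 + a * phi))
    return xs
```

With the same large step (a·h = 10 here), explicit Euler would give x₁ = (1 − a·h)·x₀ = −9·x₀, losing positivity immediately, which is why NSFD discretizations are preferred for population and viral-load variables.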

Computation doi: 10.3390/computation11030053

Authors: Natalia Sorokova Miroslav Variny Yevhen Pysmennyy Yuliia Kol’chik

Milled peat must be dried for the production of peat fuel briquettes. The current trend in the creation of drying technologies is the intensification of the dehydration process while obtaining a high-quality final product. An increase in the temperature of the drying agent above 300 °C significantly accelerates the reaching of the final moisture content of the peat. In the final stage, it is also accompanied by partial thermal decomposition of the solid phase. Its first stage, the decomposition of hemicellulose, contributes to a decrease in weight and an increase in the caloric content of the dry residue. The development of high-temperature drying modes consists of determining the temperature and velocity of the drying agent for which the time taken by the material to reach the equilibrium moisture content is minimal and the temperature of the material does not rise above the second-stage decomposition temperature of cellulose. This problem can be solved by mathematical modeling of the dynamics of peat particle drying in the flow. The article presents a mathematical model of heat and mass transfer, phase transitions, and shrinkage during the dehydration of milled peat particles. The equations of the mathematical model were built based on the differential equation of mass transfer in open deformable systems, which, in the absence of deformations, turns into the known equation of state. A numerical method for implementing the mathematical model has been developed. The adequacy of the mathematical model is confirmed by comparing the results of numerical modeling with known experimental data.

Computation doi: 10.3390/computation11030052

Authors: Mohammad Mustafa Taye

Convolutional neural networks (CNNs) are one of the main types of neural networks used for image recognition and classification. CNNs have several uses, including object recognition, image processing, computer vision, and face recognition. Input to a convolutional neural network is provided through images. Rather than relying on manually created features, convolutional neural networks automatically learn a hierarchy of features that can then be utilized for classification. To achieve this, a hierarchy of feature maps is constructed by iteratively convolving the input image with learned filters. Because of this hierarchical method, higher layers can learn more intricate features that are also distortion- and translation-invariant. The main goals of this study are to help academics understand where there are research gaps and to discuss in depth the building blocks of CNNs, their roles, and other vital issues.
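The repeated "convolve the input with learned filters" step is just a sliding dot product. A minimal valid-mode sketch in plain Python (following the common deep-learning convention of implementing convolution as cross-correlation):

```python
def conv2d_valid(image, kernel):
    """'Valid' 2D convolution (cross-correlation, as in most CNN
    frameworks): slide the kernel over the image and take dot products,
    producing one feature map."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            acc = sum(image[i + di][j + dj] * kernel[di][dj]
                      for di in range(kh) for dj in range(kw))
            row.append(acc)
        out.append(row)
    return out
```

Stacking such maps, applying a nonlinearity, and pooling yields the feature hierarchy described above; the kernels themselves are the weights learned during training.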

Computation doi: 10.3390/computation11030051

Authors: Alexi Delgado Ruth Condori Miluska Hernández Enrique Lee Huamani Laberiano Andrade-Arenas

Industrial hygiene is a preventive discipline that seeks to avoid occupational illnesses and damage to health caused by various possible toxic agents. The purpose of this study is to analyze several risk factors simultaneously (body vibration, lighting, heat stress, and noise), to obtain an overall risk assessment of these factors, and to classify them on a scale of Unacceptable, Not recommended, or Acceptable. In this work, an artificial intelligence model based on the grey clustering method was applied to evaluate the quality of industrial hygiene. The grey clustering method was selected because it enables the integration of objective factors related to workplace hazards with subjective employee evaluations. A case study in three warehouses of a brewery in Peru was developed. The results showed that the warehouses have an acceptable level of hygiene quality. These results could help industries decide whether to conduct evaluations of the different occupational agents and determine whether hygiene quality represents a risk, as well as provide recommendations with respect to the factors considered.

Computation doi: 10.3390/computation11030050

Authors: Karima Tamsaouete Baha Alzalg

In rotated quadratic cone programming problems, we minimize a linear objective function over the intersection of an affine linear manifold with the Cartesian product of rotated quadratic cones. In this paper, we introduce the rotated quadratic cone programming problems as a “self-made” class of optimization problems. Based on our own Euclidean Jordan algebra, we present a glimpse of the duality theory associated with these problems and develop a special-purpose primal–dual interior-point algorithm for solving them. The efficiency of the proposed algorithm is shown by providing some numerical examples.
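For reference, the rotated quadratic (second-order) cone in $\mathbb{R}^n$, $n \ge 3$, is standardly defined as

```latex
\mathcal{Q}_r^{\,n} \;=\; \left\{\, x \in \mathbb{R}^{n} \;:\; 2\,x_1 x_2 \,\ge\, x_3^2 + \cdots + x_n^2,\quad x_1 \ge 0,\; x_2 \ge 0 \,\right\}.
```

It is the image of the standard second-order cone $\{x : x_1 \ge \|(x_2,\dots,x_n)\|\}$ under a simple orthogonal rotation in the $(x_1, x_2)$-coordinates, which is what makes a self-contained Jordan-algebraic treatment natural.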

Computation doi: 10.3390/computation11030049

Authors: Samundra Regmi Ioannis K. Argyros Stepan Shakhno Halyna Yarmola

Local and semi-local convergence analyses are developed for a class of derivative-free iterative methods for solving nonlinear Banach-space-valued operator equations under classical Lipschitz conditions for first-order divided differences. Special cases of this class are well-known iterative algorithms, in particular the Secant, Kurchatov, and Steffensen methods, as well as the Newton method. For the semi-local convergence analysis, we use a technique of recurrent functions and majorizing scalar sequences. First, the convergence of the scalar sequence is proved and its limit is determined. It is then shown that the sequence obtained by the proposed method is bounded by this scalar sequence. In the local convergence analysis, a computable radius of convergence is determined. Finally, results of numerical experiments are given that confirm the obtained theoretical estimates.
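As a reminder of the simplest member of this class, the Secant method replaces the derivative in Newton's iteration with a first-order divided difference through the last two iterates. A standard sketch:

```python
def secant(f, x0, x1, tol=1e-10, max_iter=50):
    """Secant method: a derivative-free Newton variant that replaces
    f'(x) with the slope of the chord through the last two iterates."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if f1 == f0:          # flat chord: cannot divide
            break
        x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x1 - x0) < tol:
            break
    return x1
```

The Kurchatov and Steffensen methods differ only in which two points define the divided difference, which is why one Lipschitz condition on divided differences covers the whole class.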

Computation doi: 10.3390/computation11030048

Authors: Parisa Bouzari Balázs Gyenge Pejman Ebrahimi Mária Fekete-Farkas

In order to achieve a specific result, a firm’s problem-solving activities can be thought of as a process that combines physical and cognitive actions. Its internal organization determines how information inputs are distributed among different task units and, as a result, how the cognitive workload is distributed. We examined a case study of Iranian small and medium enterprises (SMEs), using necessary condition analysis (NCA), a state-of-the-art method, with the help of R software to evaluate the data. According to the findings, six prerequisites must be met in order to achieve a 50% level of efficient performance: innovation at a minimum of 22.7%, CSR at a minimum of 30.4%, IT investment at a minimum of 56.7%, SMM at a minimum of 38.3%, product differentiation at a minimum of 11.7%, and CRM at a minimum of 38.3%.

Computation doi: 10.3390/computation11030047

Authors: Yubin Lee David Turcic Dan Danks Chien Wern

Grinding is widely used as the last step of the manufacturing process when a good surface finish and precise dimensional tolerances are required. However, if the grinding wheels have cracks, they may lead to a hazardous working environment and produce poor tolerance in machined products. Therefore, grinding wheels should be inspected for cracks before being mounted onto the machine. In this study, a novel method of finding possible internal cracks in the aluminium oxide grinding wheel will be explored by examining the natural frequency and displacement of wheels using an impact hammer testing method. Grinding wheels were cracked into two segments using a three-point bend fixture and then bonded intentionally to simulate cracks. The impact hammer test indicated that cracks in the grinding wheels caused a drop in natural vibration frequency and an increase in the maximum displacement of the accelerometer sensors.

Computation doi: 10.3390/computation11030046

Authors: José Antonio Valencia Johans Restrepo Hernán David Salinas Elisabeth Restrepo

A methodology is implemented to deform the surface of a magnetorheological elastomer (MRE) exposed to an external magnetic field by means of data matrix manipulation of the surface. The elastomer surface is created randomly using the Garcia and Stoll method to realize a nonuniform morphology similar to that found in real MREs. Deformations are induced by means of the translations of the magnetic particles inside the elastomer, under the influence of a uniform magnetic field, generating changes in the surface roughness. Our model computes these deformations using a three-dimensional Gaussian function bounded at 2 standard deviations from its mean value, taking as the standard deviation value the radius of the particle that causes the deformation. To find the regions deformed by the particles, we created a methodology based on the consultation, creation and modification of a system of matrices that control each point of the random surface created. This methodology allows us to work with external files of initial and subsequent positions of each particle inside the elastomer, and allows us to manipulate and analyze the results in a smoother and faster way. Results were found to be satisfactory and consistent when calculating the percentage of surface deformation of real systems.

Computation doi: 10.3390/computation11030045

Authors: Michele Bonnin Kailing Song Fabio L. Traversa Fabrizio Bonani

This paper reviews advanced modeling and analysis techniques useful in the description, design, and optimization of mechanical energy harvesting systems based on the collection of energy from vibration sources. The added value of the present contribution is to demonstrate the benefits of the exploitation of advanced techniques, most often inherited from other fields of physics and engineering, to improve the performance of such systems. The review is focused on the modeling techniques that apply to the entire energy source/mechanical oscillator/transducer/electrical load chain, describing mechanical–electrical analogies to represent the collective behavior as the cascade of equivalent electrical two-ports, introducing matching networks enhancing the energy transfer to the load, and discussing the main numerical techniques in the frequency and time domains that can be used to analyze linear and nonlinear harvesters, both in the case of deterministic and stochastic excitations.

Computation doi: 10.3390/computation11030044

Authors: Thobeka Nombebe James Allison Leonard Santana Jaco Visagie

In this paper, we investigate the performance of a variety of frequentist estimation techniques for the scale and shape parameters of the Lomax distribution. These methods include traditional methods such as the maximum likelihood estimator and the method of moments estimator. A version of the maximum likelihood estimator adjusted for bias is included as well. Furthermore, an alternative moment-based estimation technique, the L-moment estimator, is included, along with three different minimum distance estimators. The finite sample performances of each of these estimators are compared in an extensive Monte Carlo study. We find that no single estimator outperforms its competitors uniformly. We recommend one of the minimum distance estimators for use with smaller samples, while a bias-reduced version of maximum likelihood estimation is recommended for use with larger samples. In addition, the desirable asymptotic properties of traditional maximum likelihood estimators make them appealing for larger samples. We include a practical application demonstrating the use of the described techniques on observed data.
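As an illustration of the moment-based approach compared in the study, the classical method-of-moments estimator for the Lomax distribution (shape α, scale λ; our parameter names) follows from its first two moments, E[X] = λ/(α−1) and Var[X] = λ²α/((α−1)²(α−2)), valid for α > 2. Setting c = Var/E[X]² = α/(α−2) and solving gives α = 2c/(c−1):

```python
def lomax_method_of_moments(mean, variance):
    """Method-of-moments estimates for the Lomax(shape a, scale lam)
    distribution.  Uses E[X] = lam/(a-1) and
    Var[X] = lam^2 a / ((a-1)^2 (a-2)), valid for a > 2:
    from c = Var/E[X]^2 = a/(a-2) one gets a = 2c/(c-1)."""
    c = variance / mean ** 2
    a = 2.0 * c / (c - 1.0)
    lam = mean * (a - 1.0)
    return a, lam
```

In practice `mean` and `variance` are the sample moments; with the heavy Lomax tail the sample variance can be unstable for small α, one reason the minimum distance estimators fare better in small samples.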

Computation doi: 10.3390/computation11030043

Authors: Agustín Moreno Cañadas Pedro Fernando Fernández Espinosa Adolfo Ballester-Bolinches

The Kronecker algebra K is the path algebra induced by the quiver with two parallel arrows, one source and one sink (i.e., a quiver with two vertices and two arrows going in the same direction). Modules over K are said to be Kronecker modules. The classification of these modules can be obtained by solving a well-known tame matrix problem. Such a classification deals with solving systems of differential equations of the form Ax = Bx′, where A and B are m×n F-matrices with F an algebraically closed field. On the other hand, researching the Yang–Baxter equation (YBE) is a topic of great interest in several science fields. It has allowed advances in physics, knot theory, quantum computing, cryptography, quantum groups, non-associative algebras, Hopf algebras, etc. It is worth noting that giving a complete classification of the YBE solutions is still an open problem. This paper proves that some indecomposable modules over K called pre-injective Kronecker modules give rise to some algebraic structures called skew braces which allow the solutions of the YBE. Since preprojective Kronecker modules categorize some integer sequences via some appropriated snake graphs, we prove that such modules are automatic and that they induce the automatic sequences of continued fractions.
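For orientation, the Kronecker quiver and the matrix-pencil problem it encodes can be written as

```latex
1 \rightrightarrows 2,
\qquad
M = (A, B), \quad A, B \in F^{m \times n},
\qquad
A x = B x',
```

where a module M assigns vector spaces to the two vertices and the pair of matrices (A, B) to the two parallel arrows, so classifying Kronecker modules amounts to classifying such pencils up to simultaneous equivalence.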

Computation doi: 10.3390/computation11020042

Authors: Carlos Andres Ramos-Paja Juan David Bastidas-Rodriguez Andres Julian Saavedra-Montes

Low-voltage photovoltaic systems are being widely used around the world, including their introduction into the power grid. The development of these systems requires the adaptation of several power converters, their static and dynamic modeling, the design of passive elements, and the design of controller parameters, among other actions. Today, power converters are key elements in the development of photovoltaic systems, and classical converters such as buck converters produce discontinuous input and output currents, requiring a high input capacitance and impacting the output power quality of these systems. This paper presents a proposal for a low-voltage photovoltaic system that uses a continuous input/output current buck converter, which enhances the operation of the classical buck converter in photovoltaic systems. The methodology describes the proposed photovoltaic system, including the power converter, its detailed operation, and the analysis of its waveforms. Moreover, the methodology includes a mathematical model of the photovoltaic system’s dynamic behavior and the design of a sliding-mode controller for maximum power extraction and perturbation rejection. The photovoltaic system is validated in two ways: first, a comparison with the classical buck converter highlights the advantages of continuous input/output currents; then, an application example using commercial devices is described in detail. The application example uses a flowchart to design the power converter and the sliding-mode controller, and a circuit simulation confirms the advantages of the continuous input/output current buck converter with its controller. In the circuit simulation, the control strategy is formed by a perturb and observe algorithm that generates the voltage reference for the sliding-mode controller, which guarantees system stability, tracks the maximum power point, and rejects the double-frequency oscillations generated by the intended microinverter.
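The perturb-and-observe layer of the control strategy admits a compact sketch. This is the generic textbook version on an assumed concave power-voltage curve, not the authors' converter model: nudge the operating voltage, keep the direction if power increased, reverse it otherwise.

```python
def perturb_and_observe(power_at, v0, step, n_iter):
    """Perturb-and-observe MPPT sketch: perturb the operating voltage by
    a fixed step, keep the direction if the measured power increased,
    reverse it otherwise.  power_at models the PV power-voltage curve."""
    v, direction = v0, 1.0
    p_prev = power_at(v)
    for _ in range(n_iter):
        v += direction * step
        p = power_at(v)
        if p < p_prev:
            direction = -direction
        p_prev = p
    return v
```

At steady state the voltage oscillates within one step of the maximum power point; in the paper's scheme this reference is then handed to the sliding-mode controller, which handles stability and disturbance rejection.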

Computation doi: 10.3390/computation11020041

Authors: Iñaki Garmendia Haritz Vallejo Usue Osés

Composite moulds constitute an attractive alternative to classical metallic moulds for components fabricated by processes such as Resin Transfer Moulding (RTM). However, many factors have to be accounted for if a correct design of the moulds is sought. In this paper, the Finite Element Method (FEM) is used to help in the design of the mould. To do so, a thermo-electrical simulation has been performed in MSC-Marc for the preheating phase in order to ensure that the mould can be heated, through the Joule effect, according to the thermal cycle specified under operating conditions. Mean temperatures of 120 °C and 100 °C are predicted for the lower and upper semi-mould parts, respectively. Additionally, a thermo-electrical-mechanical calculation has been completed with MSC-Marc to calculate the tensile state of the system during the preheating stage. For the filling phase, the filling process itself has been simulated with RTM-Worx. Both the uniform- and non-uniform-temperature-distribution approaches have been used to assess the resulting effect. It has been found that this piece of software cannot model the temperature dependency of the resin, so a numerical workaround had to be applied in the second case. Results have been found to be very dependent on the approach, the filling time being 73% greater when modelling a non-uniform temperature distribution. The correct behaviour of the mould during the filling stage, under the filling pressure, has also been verified with a specific mechanical analysis conducted with MSC-Marc. Finally, the thermo-elastic response of the mould during the curing stage has been numerically assessed. This analysis has been carried out with MSC-Marc, paying special attention to the curing of the resin and the exothermic reaction that takes place. For the sake of accuracy, a user subroutine including specific curing laws has been used. The material properties employed are also described in detail, following a modified version of the Scott model with curing properties extracted from experiments. All these detailed calculations have been the cornerstone of designing the composite mould and have also unveiled some capabilities that are missing in the commercial codes employed. Future versions of these codes will have to address these weak points but, as a whole, the Finite Element Method is shown to be an appropriate tool for helping in the design of composite moulds.

Computation doi: 10.3390/computation11020040

Authors: Allé Dioum Yacouba I. Diakité Yuiry Malozovsky Blaise Awola Ayirizia Aboubaker Chedikh Beye Diola Bagayoko

We present results from ab initio, self-consistent calculations of electronic, transport, and bulk properties of cubic magnesium silicide (Mg2Si). We employed a local density approximation (LDA) potential to perform the computation, following the Bagayoko, Zhao, and Williams (BZW) method, as improved by Ekuma and Franklin (BZW-EF). The BZW-EF method guarantees the attainment of the ground state as well as the avoidance of over-complete basis sets. The ground state electronic energies, total and partial densities of states, effective masses, and the bulk modulus are investigated. As per the calculated band structures, cubic Mg2Si has an indirect band gap of 0.896 eV, from Γ to X, for the room temperature experimental lattice constant of 6.338 Å. This is in reasonable agreement with the experimental value of 0.8 eV, unlike previous ab initio DFT results of 0.5 eV or less. The predicted zero temperature band gap of 0.965 eV, from Γ to X, is obtained for the computationally determined equilibrium lattice constant of 6.218 Å. The calculated value of the bulk modulus of Mg2Si is 58.58 GPa, in excellent agreement with the experimental value of 57.03 ± 2 GPa.

Computation doi: 10.3390/computation11020039

Authors: Dingding Cao MieowKee Chan SokChoo Ng

Rapid industrialization and population growth cause severe water pollution and increased water demand. The use of FeCu nanoparticles (nanoFeCu) in treating sewage has been proven to be a space-efficient method. The objective of this work is to develop a recurrent neural network (RNN) model to estimate the performance of immobilized nanoFeCu in sewage treatment, thereby easing the monitoring and forecasting of sewage quality. In this work, sewage data were collected from a local sewage treatment plant. pH, nitrate, nitrite, and ammonia were used as the inputs. One-to-one and three-to-three RNN architectures were developed, optimized, and analyzed. The results showed that the one-to-one model predicted all four inputs with good accuracy, with R2 within a range of 0.87 to 0.98. However, the stability of the one-to-one model was not as good as that of the three-to-three model, as the inputs were chemically and statistically correlated in the latter model. The best three-to-three model had a single layer with 10 neurons and an average R2 of 0.91. In conclusion, this research provides data support for designing neural network prediction models for sewage and has positive significance for the exploration of smart sewage treatment plants.
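
The recurrent step at the heart of such a model can be sketched in plain Python. This is a minimal, illustrative Elman-style update, assuming a single hidden layer of 10 neurons (the best configuration reported); the weights are placeholders, not the fitted model.

```python
import math

def rnn_step(x, h, w_in, w_rec, b):
    """One Elman-style recurrent update: h' = tanh(W_in @ x + W_rec @ h + b)."""
    return [math.tanh(b[i]
                      + sum(w_in[i][j] * x[j] for j in range(len(x)))
                      + sum(w_rec[i][j] * h[j] for j in range(len(h))))
            for i in range(len(h))]

# Toy setup: 3 inputs (say nitrate, nitrite, ammonia) -> 10 hidden neurons.
n_in, n_hid = 3, 10
w_in = [[0.1] * n_in for _ in range(n_hid)]
w_rec = [[0.05] * n_hid for _ in range(n_hid)]
b = [0.0] * n_hid
h = rnn_step([0.5, 0.2, 0.3], [0.0] * n_hid, w_in, w_rec, b)
```

In a trained network this step is applied once per time point, so predictions can be rolled forward from the most recent sewage measurements.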

Computation doi: 10.3390/computation11020038

Authors: Shanjida Chowdhury Mahfujur Rahman Indrajit Ajit Doddanavar Nurul Mohammad Zayed Vitalii Nitsenko Olena Melnykovych Oksana Holik

This study aimed to examine the role and impact of social media on knowledge of the COVID-19 pandemic in Bangladesh through disseminating actual changes in health safety, trust and belief in social media's coverage statistics, isolation, and psychological numbness among students. This study used a cross-sectional design in which a quantitative approach was adopted. Data from an online survey were collected over a short period during the early stages of COVID-19 to determine the relationship between social media activity and knowledge of the COVID-19 pandemic with accuracy. A total of 189 respondents were interviewed using structured questionnaires during the onset of the COVID-19 outbreak among Bangladeshi university students. Exploratory factor analysis (EFA) and path analysis were performed. Of the 189 respondents, about 80% were aged between 16 and 25 years, of whom nearly 60.33% were students. This study explored four factors—knowledge and health safety, trust in social media news, social distancing or quarantine, and psychological effect—using factor analysis. These four factors were also found to be positively associated in the path analysis. Validation of the model was assessed, revealing that the path diagram with four latent exogenous variables fit well. Each factor coefficient was treated as a factor loading (β = 0.564 to 0.973). The results suggested that the measurement models using four elements were appropriate. The coefficient of determination was 0.98, indicating that the model provided an adequate explanation. Social media is transforming the dynamics of health issues, providing information and warnings about the adverse effects of COVID-19, having a positive impact on lockdown or quarantine, and promoting psychological wellness. This comprehensive study suggests that social media plays a positive role in enhancing knowledge about COVID-19 and other pandemic circumstances.

Computation doi: 10.3390/computation11020037

Authors: Diah Chaerani Shenya Saksmilena Athaya Zahrani Irmansyah Elis Hertini Endang Rusyaman Erick Paulus

In this paper, the implementation of the Benders decomposition method to solve the Adjustable Robust Counterpart for the Internet Shopping Online Problem (ARC-ISOP) is discussed. Since the ARC-ISOP is a mixed-integer linear programming (MILP) model, the discussion begins by identifying the continuous (linear) variables and the integer variables. In terms of Benders decomposition, the ARC-ISOP model can be solved by partitioning it into smaller subproblems (the master problem and the inner problem), which simplifies the computations. Pseudo-code in the Python programming language is presented in this paper. An example case is presented for the ARC-ISOP to determine the optimal total cost (including product price and shipping cost) and delivery time. Numerical simulations were carried out in Python with a case study in the form of five products purchased from six shops.
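
The master/inner split can be illustrated on a toy single-shop instance. The sketch below is not the ARC-ISOP model; it assumes a made-up problem (a fixed shop fee F that waives shipping, unit shipping cost c, and demand d) purely to show the cut-generation loop.

```python
# Toy Benders loop (illustrative data, not the ARC-ISOP model):
#   min  F*y + c*x   s.t.  x >= d - M*y,  x >= 0,  y in {0, 1}
# y is the integer master variable (pay fee F to waive shipping),
# x is the continuous inner variable (units shipped at cost c each).
F, c, d, M = 5.0, 2.0, 4.0, 4.0

def inner_problem(y):
    """For fixed y: min c*x s.t. x >= d - M*y, x >= 0.
    Returns the optimal cost and the dual price pi of the constraint."""
    slack = d - M * y
    if slack > 0:
        return c * slack, c   # constraint binds, dual price = c
    return 0.0, 0.0           # constraint is slack, dual price = 0

cuts = []                      # Benders cuts: eta >= pi * (d - M*y)
upper = float("inf")
for _ in range(10):
    # Master problem (here solved by enumerating the tiny domain {0, 1};
    # eta >= 0 is valid because shipping costs are nonnegative).
    lower, y_star = min(
        (F * y + max([pi * (d - M * y) for pi in cuts], default=0.0), y)
        for y in (0, 1))
    cost, pi = inner_problem(y_star)
    upper = min(upper, F * y_star + cost)
    if upper - lower < 1e-9:   # bounds have met: y_star is optimal
        break
    cuts.append(pi)            # add the optimality cut and iterate
```

On this instance the loop converges in two iterations, choosing the shop whose fee waives the shipping cost; in the full model the master MILP would be handed to a solver rather than enumerated.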

Computation doi: 10.3390/computation11020036

Authors: Gilberto González-Parra Abraham J. Arenas

Over the course of the COVID-19 pandemic, millions of deaths and hospitalizations have been reported. Different SARS-CoV-2 variants of concern have been recognized during this pandemic, and some of them have caused uncertainty and changes in the dynamics. The Omicron variant has caused a large number of infected cases in the US and worldwide. The average death toll during the Omicron wave increased in comparison with previous SARS-CoV-2 waves. We studied the Omicron wave by using a highly nonlinear mathematical model for the COVID-19 pandemic. The novel model includes individuals who are vaccinated and asymptomatic, which influences the dynamics of SARS-CoV-2. Moreover, the model considers the waning of immunity and the efficacy of the vaccine against the Omicron strain. This study uses the facts that the Omicron strain has a higher transmissibility than the previously circulating SARS-CoV-2 strain but is less deadly. Preliminary studies have found that Omicron has a lower case fatality rate than previously circulating SARS-CoV-2 strains. The simulation results show that even though the Omicron strain is less deadly, it might cause more deaths, hospitalizations, and infections. We provide a variety of scenarios that help to obtain insight into the Omicron wave and its consequences. The proposed mathematical model, in conjunction with the simulations, provides an explanation for a large Omicron wave under various conditions related to vaccines and transmissibility. These results raise awareness that new SARS-CoV-2 variants can cause more deaths even if their fatality rate is lower.
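
The qualitative claim that a more transmissible but less deadly strain can still produce more deaths can be reproduced with a toy Euler-integrated SIR model. This is not the paper's model, which includes vaccination, waning immunity, and asymptomatic compartments; all parameters below are illustrative.

```python
# Toy Euler-integrated SIR model (all parameters illustrative): a more
# transmissible but less deadly strain can still cause more total deaths.
def sir_deaths(r0, cfr, gamma=0.1, days=2000, dt=0.5, i0=1e-4):
    beta = r0 * gamma                 # transmission rate from R0
    s, i, r = 1.0 - i0, i0, 0.0       # susceptible, infected, recovered
    for _ in range(int(days / dt)):
        new_inf = beta * s * i * dt
        new_rec = gamma * i * dt
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
    return cfr * r                    # deaths as a fraction of the population

deaths_high_r0 = sir_deaths(r0=4.0, cfr=0.010)   # transmissible, less deadly
deaths_low_r0 = sir_deaths(r0=1.3, cfr=0.015)    # less transmissible, deadlier
```

Because the more transmissible strain infects a far larger final fraction of the population, its lower fatality rate is outweighed, matching the direction of the paper's conclusion.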

Computation doi: 10.3390/computation11020035

Authors: Volodymyr Pavlikov Eduard Tserne Oleksii Odokiienko Nataliia Sydorenko Maksym Peretiatko Olha Kosolapova Ihor Prokofiiev Andrii Humennyi Konstantin Belousov

We developed a signal processing algorithm to determine the three components of the velocity vector of a highly maneuverable aircraft. We developed an equation for the distance from an aircraft to an underlying surface. This equation describes the general case of random spatial aircraft positions. In particular, it accounts for distance changes due to variations in aircraft flight velocity. We also determined the relationship between the radial velocity measured within the radiation pattern beam, the Doppler shift of the signal frequency, and the law of range variation within the irradiated surface area. The models of the emitted and received signals were substantiated. The proposed equation of the received signal assumes that reflection occurs not from a point object but from a spatial area of an underlying surface. It fully corresponds to the real interaction process between an electromagnetic field and a surface. The considered solution allowed us to synthesize the optimal algorithm to estimate the current range and the three components {Vx,Vy,Vz} of the aircraft's velocity vector V. In accordance with the synthesized algorithm, we propose a radar structural diagram. The developed radar structural diagram consists of three channels for transmitting and receiving signals. This number of channels is necessary to estimate the full set of velocity and altitude vector components. We studied several aircraft flight trajectories via simulations. We analyzed straight-line uniform flights; flights with changes in yaw, roll, and attack angles; vertical rises; and landings on a glide path and lining up with the correct yaw, pitch, and roll angles. The simulation results confirmed the correctness of the obtained solution.
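
The stated relationship between radial velocity and Doppler shift reduces to f_d = 2V_r/λ for a monostatic radar. A small sketch with illustrative numbers (the 3 cm wavelength and 30° beam tilt are assumptions, not values from the paper):

```python
import math

def doppler_shift(v, los, wavelength):
    """Doppler shift for velocity vector v observed along the unit
    line-of-sight vector los: f_d = 2 * V_r / lambda, where V_r is the
    radial (projected) velocity."""
    v_r = sum(vi * ui for vi, ui in zip(v, los))
    return 2.0 * v_r / wavelength

# Illustrative numbers: 60 m/s forward speed, beam tilted 30 degrees down,
# 3 cm wavelength (assumed, not from the paper).
th = math.radians(30.0)
los = (math.cos(th), 0.0, -math.sin(th))     # unit vector toward the surface
f_d = doppler_shift((60.0, 0.0, 0.0), los, wavelength=0.03)
```

With three beams along linearly independent line-of-sight vectors, three such shifts suffice to recover all of {Vx, Vy, Vz}, which is why the proposed radar uses three channels.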

Computation doi: 10.3390/computation11020034

Authors: Sneha Kumari Akhilesh Kumar Pathak Rahul Kumar Gangwar Sumanta Gupta

In this study, we demonstrate the influence of operating temperature variation and stress-induced effects on a silicon-on-insulator (SOI)-based multi-mode interference (MMI) coupler. Here, SiGe is introduced as the cladding layer to analyze its effect on the optical performance of the MMI coupler. The SiGe cladding thickness is varied from 5 nm to 40 nm. Characterization of the MMI coupler for ridge waveguides with both rectangular and trapezoidal sidewall slope angle cross-sections is reviewed in terms of power splitting ratio and birefringence. Stress-induced birefringence as a function of operating temperature and cladding thickness for the fundamental mode has been calculated. A trapezoidal waveguide with 40 nm of cladding thickness induces more stress and, therefore, affects birefringence more than a rectangular waveguide of any thickness. Simulation results using the finite element method (FEM) confirmed that operating temperature variation, upper cladding thickness, and its stress effect are significant parameters that drastically modify the performance of an MMI coupler.

Computation doi: 10.3390/computation11020033

Authors: Binbin Li Zhefan Ye Jue Li Siyuan Shao Chenlu Wang

To reduce traffic congestion and pollution, urban rail transit in China has been in a stage of rapid development in recent years. As a result, rail transit service interruption events are becoming more common, seriously affecting the resilience of the transportation system and user satisfaction. Therefore, determining how passenger waiting tolerance changes, which helps establish a scientific and effective emergency plan, is urgent. First, the variables and levels of the urban rail service interruption scenarios were screened and determined, and a stated preference questionnaire was designed using the orthogonal design method. Data on the waiting tolerance of passengers during service interruptions were then obtained through the questionnaires. Second, combined with the questionnaire data, an accelerated failure time model that obeys the exponential distribution was constructed. The results indicate that factors such as the service interruption duration, travel distance, bus bridging, information accuracy, attention to operation information, travel frequency, and interruption experience affect the waiting tolerance of passengers during service interruptions. Finally, combined with a sensitivity analysis of the key influencing factors, policy analysis and suggestions are summarized to provide theoretical support for urban rail operation and management departments to capture passenger waiting tolerance accurately during service interruptions and formulate an efficient, high-quality emergency organization plan.
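
An exponential accelerated failure time (AFT) model links covariates to waiting time multiplicatively: log T = x'β + ε, so a covariate with a positive coefficient stretches the tolerated waiting time. A sketch with hypothetical covariates and coefficients, none taken from the paper:

```python
import math

def aft_survival(t, x, beta):
    """Exponential accelerated failure time model: log T = x'beta + error,
    so T has mean exp(x'beta) and survival S(t|x) = exp(-t / exp(x'beta)).
    S(t|x) reads as the probability a passenger is still waiting at time t."""
    mean_wait = math.exp(sum(b * xi for b, xi in zip(beta, x)))
    return math.exp(-t / mean_wait)

# Hypothetical covariates: [intercept, bus bridging (0/1), accurate info (0/1)];
# the coefficients are made up for illustration.
beta = [2.0, 0.5, 0.3]
p_no_support = aft_survival(10.0, [1, 0, 0], beta)
p_support = aft_survival(10.0, [1, 1, 1], beta)
```

Under these illustrative coefficients, providing bus bridging and accurate information raises the probability that a passenger is still willing to wait at minute 10, the kind of effect direction the survey model quantifies.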

Computation doi: 10.3390/computation11020032

Authors: Zhe Wang Hao Xu Pan Zhou Gang Xiao

Multilabel data share important features, including label imbalance, which has a significant influence on the performance of classifiers. Because of this problem, a widely used multilabel classification algorithm, the multilabel k-nearest neighbor (ML-kNN) algorithm, performs poorly on imbalanced multilabel data. To address this problem, this study proposes an improved ML-kNN algorithm based on value and weight. In this improved algorithm, labels are divided into minority and majority, and different strategies are adopted for each. By considering the latent label information carried by the nearest neighbors, a value calculation method is proposed and used to directly classify majority labels. Additionally, to address the misclassification problem caused by a lack of nearest neighbor information for minority labels, a weight calculation is proposed. The proposed weight calculation converts distance information, with and without label sets in the nearest neighbors, into weights. The experimental results on multilabel datasets from different benchmarks demonstrate the performance of the algorithm, especially for datasets with high imbalance. Across different evaluation metrics, the results improve by approximately 2–10%. The verified algorithm could be applied to multilabel classification in various fields involving label imbalance, such as drug molecule identification, building identification, and text categorization.
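
The nearest-neighbour machinery that ML-kNN builds on can be sketched with a plain per-label vote. This is a deliberate simplification for illustration, not the paper's value/weight scheme:

```python
# A plain per-label k-nearest-neighbour vote (a simplification for
# illustration, not the paper's value/weight scheme).
def knn_multilabel(x, x_train, y_train, k=3):
    order = sorted(range(len(x_train)),
                   key=lambda i: sum((a - b) ** 2
                                     for a, b in zip(x, x_train[i])))
    nn = order[:k]                      # indices of the k nearest samples
    return [1 if sum(y_train[i][l] for i in nn) * 2 > k else 0
            for l in range(len(y_train[0]))]

x_train = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (1.0, 1.0), (0.9, 1.0)]
y_train = [[1, 0], [1, 0], [1, 1], [0, 1], [0, 1]]
pred = knn_multilabel((0.05, 0.05), x_train, y_train, k=3)
```

The imbalance problem is visible even here: a rare label seldom reaches a majority among the neighbours, which is what the paper's weight calculation is designed to correct.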

Computation doi: 10.3390/computation11020031

Authors: Sergei Abramovich

The paper is written to demonstrate the applicability of the notion of triangulation typically used in social sciences research to computationally enhance the mathematics education of future K-12 teachers. The paper starts with the so-called Brain Teaser used as background for (what is called in the paper) computational triangulation in the context of four digital tools. Computational problem solving and problem formulating are presented as two sides of the same coin. By revealing the hidden mathematics of Fibonacci numbers included in the Brain Teaser, the paper discusses the role of computational thinking in the use of the well-ordering principle, the generating function method, digital fabrication, difference equations, and continued fractions in the development of computational algorithms. These algorithms eventually lead to a generalized Golden Ratio in the form of a string of numbers independently generated by digital tools used in the paper.
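
The string of numbers converging to the Golden Ratio can be generated in a few lines: the convergents of the continued fraction [1; 1, 1, ...] are ratios of consecutive Fibonacci numbers. A minimal sketch of that computational algorithm:

```python
# Convergents of the continued fraction [1; 1, 1, ...]: ratios of
# consecutive Fibonacci numbers approaching the Golden Ratio.
def golden_ratio_convergents(n):
    a, b = 1, 1                  # consecutive Fibonacci numbers
    out = []
    for _ in range(n):
        a, b = b, a + b
        out.append(b / a)
    return out

convergents = golden_ratio_convergents(20)
```

Replacing the repeated partial quotient 1 with another integer yields the generalized Golden Ratios (metallic means) the paper's digital tools generate.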

Computation doi: 10.3390/computation11020030

Authors: Pornnapat Yamphram Phiraphat Sutthimat Udomsak Rakwongwan

This paper studies the portfolio selection problem where the tradable assets are a bank account and standard put and call options written on the S&P 500 index, in incomplete markets with bid–ask spreads and finite liquidity. The problem is mathematically formulated as an optimization problem where the variance of the portfolio is perceived as a risk. The task is to find the portfolio which has a satisfactory return but the minimum variance. The underlying is modeled by a variance gamma process, which can explain the extreme price movements of the asset. We also study how the optimized portfolio changes subject to a user's views of the future asset price. Moreover, the optimization model is extended for asset pricing and hedging. To illustrate the technique, we compute indifference prices for buying and selling six options, namely a European call option, a quadratic option, a sine option, a butterfly spread option, a digital option, and a log option, and propose the hedging portfolios, which are the portfolios one needs to hold to minimize the risk from selling or buying such options. The sensitivity of the price to the modeling parameters is also investigated. Our hedging strategies perform well, as reflected in the symmetry of the kernel density estimate of the portfolio payout: the payouts of the hedging portfolios are very close to those of the bought or sold options. The results shown in this study are illustrations of the techniques. The approach can also be used for other derivative products with known payoffs in other financial markets.

Computation doi: 10.3390/computation11020029

Authors: Alessandra Carleo Roberto Rocci Maria Sole Staffa

The objective of the present paper is to propose a new method to measure the recovery performance of a portfolio of non-performing loans (NPLs) in terms of recovery rate and time to liquidate. The fundamental idea is to draw a curve representing the recovery rates over time, here assumed discretized, for example, in years. In this way, the user simultaneously gets information about the recovery rate and the time to liquidate of the portfolio. In particular, it is discussed how to estimate such a curve in the presence of right-censored data, e.g., when the NPLs composing the portfolio have been observed in different time periods, with a method based on an algorithm usually used in the construction of survival curves. The curves obtained are smoothed with nonparametric statistical learning techniques. The effectiveness of the proposal is shown by applying the method to simulated and real financial data. The latter concern portfolios of Italian unsecured NPLs taken over by a specialized operator.
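
The survival-curve algorithm alluded to is product-limit (Kaplan-Meier-style) estimation. A minimal sketch on toy data, assuming distinct observation times; here an "event" is a loan being liquidated, and still-open loans are right-censored:

```python
# Kaplan-Meier-style product-limit curve for right-censored data
# (assuming distinct observation times; an "event" is a loan being
# liquidated, and still-open loans are right-censored).
def kaplan_meier(times, liquidated):
    at_risk = len(times)
    s, curve = 1.0, []
    for t, event in sorted(zip(times, liquidated)):
        if event:
            s *= 1.0 - 1.0 / at_risk   # survival drops at each event time
            curve.append((t, s))
        at_risk -= 1                   # censored loans leave the risk set
    return curve

# Loans observed at t = 1..4; the loan at t = 2 is still open (censored).
curve = kaplan_meier([1, 2, 3, 4], [True, False, True, True])
```

Note how the censored loan at t = 2 shrinks the risk set without forcing a drop in the curve, which is exactly how NPLs observed over different periods can be pooled.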

Computation doi: 10.3390/computation11020028

Authors: Suparna Biswas Rituparna Sen

Range value at risk (RVaR) is a quantile-based risk measure with two parameters. As special cases, the value at risk (VaR) and the expected shortfall (ES), two well-known but competing regulatory risk measures, are both members of the RVaR family. The estimation of RVaR is a critical issue in the financial sector. Several nonparametric RVaR estimators are described here. We examine these estimators' accuracy in various scenarios using Monte Carlo simulations. Our simulations shed light on how varying p and q with respect to n, the total number of samples, affects the effectiveness of the nonparametric RVaR estimators. Finally, we perform a backtesting exercise of RVaR based on Acerbi and Szekely's test.
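
One simple nonparametric estimator averages the order statistics between the p- and q-quantiles, since RVaR interpolates between VaR and ES. A sketch of that idea, not necessarily one of the exact estimators compared in the paper:

```python
# Average of the order statistics between the p- and q-quantiles: one
# simple nonparametric RVaR estimator (RVaR interpolates VaR and ES).
def empirical_rvar(losses, p, q):
    xs = sorted(losses)
    lo, hi = int(len(xs) * p), int(len(xs) * q)
    return sum(xs[lo:hi]) / (hi - lo)

sample = list(range(1, 101))                       # toy losses 1..100
rvar = empirical_rvar(sample, 0.90, 0.95)
var_90 = sorted(sample)[int(len(sample) * 0.90)]   # empirical VaR at 0.90
es_90 = sum(sorted(sample)[90:]) / 10              # empirical ES at 0.90
```

On the toy sample the estimate lands between the VaR and ES at the same level, as the theory requires; the paper's simulations study how such estimators behave as p, q, and n vary.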

Computation doi: 10.3390/computation11020027

Authors: Farida Akiyanova Nurlan Ongdas Nurlybek Zinabdin Yergali Karakulov Adlet Nazhbiyev Zhanbota Mussagaliyeva Aksholpan Atalikhova

Flooding events have been negatively affecting the Republic of Kazakhstan, with higher occurrence in flat parts of the country during spring snowmelt in snow-fed rivers. The current project aims to assess the flood hazard reduction capacity of the Alva irrigation system, which is located in the interfluve area of the Yesil and Nura Rivers. The assessment is performed by simulating spring floods using HEC-RAS 2D and controlling the gates of the existing system. A digital elevation model (DEM) of the study domain was generated by integrating Sentinel-1 radar images with data obtained from a bathymetrical survey and aerial photography. Comparison of the simulated inundation area with a remote sensing image of the spring flood in April 2019 indicated that the main reason for differences was local snowmelt in the study domain. Excluding areas flooded by local snowmelt, which were identified using the updated DEM, from the comparison increased the model similarity to 70%. Further simulations of hydrographs with different exceedance probabilities enabled classification of the study area according to maximum flood depth and flood duration. Theoretical changes to the dam crest as well as additional gates were proposed to improve the system capacity by flooding agriculturally important areas that were not flooded during the simulation of the current system. The developed model could be used by local authorities for further development of flood mitigation measures and assessment of different development plans for the irrigation system.

Computation doi: 10.3390/computation11020026

Authors: Salma Abbas Mustapha Muhammad Farrukh Jamal Christophe Chesneau Isyaku Muhammad Mouna Bouchane

In this paper, we develop the new extended Kumaraswamy generated (NEKwG) family of distributions. It aims to improve the modeling capability of the standard Kumaraswamy family by using a one-parameter exponential-logarithmic transformation. Mathematical developments of the NEKwG family are provided, such as the probability density function series representation, moments, information measure, and order statistics, along with asymptotic distribution results. Two special distributions are highlighted and discussed, namely, the new extended Kumaraswamy uniform (NEKwU) and the new extended Kumaraswamy exponential (NEKwE) distributions. They differ in support, but both have the features to generate models that accommodate versatile skewed data and non-monotone failure rates. We employ maximum likelihood, least-squares estimation, and Bayes estimation methods for parameter estimation. The performance of these methods is discussed using simulation studies. Finally, two real data applications are used to show the flexibility and importance of the NEKwU and NEKwE models in practice.
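
For reference, the baseline Kumaraswamy distribution that the NEKwG family extends has the closed-form CDF F(x) = 1 - (1 - x^a)^b on (0, 1), so quantiles and inverse-CDF sampling are immediate. A sketch of that baseline, not of the NEKwG transformation itself:

```python
# Baseline Kumaraswamy distribution extended by the NEKwG family:
# F(x) = 1 - (1 - x**a)**b on (0, 1); the closed-form inverse gives
# quantiles and inverse-CDF sampling directly.
def kw_cdf(x, a, b):
    return 1.0 - (1.0 - x ** a) ** b

def kw_inv_cdf(u, a, b):
    return (1.0 - (1.0 - u) ** (1.0 / b)) ** (1.0 / a)

median = kw_inv_cdf(0.5, a=2.0, b=3.0)   # median of Kumaraswamy(2, 3)
```

This closed-form invertibility is one reason Kumaraswamy-type families are attractive to generalize: simulation and quantile-based estimation stay cheap.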

Computation doi: 10.3390/computation11020025

Authors: Alexandros Koulis Constantinos Kyriakopoulos

Several studies estimate the volatility spillover effects between gold and silver returns, but none of them has used implied volatility to evaluate the long-term relationship between these two metal markets. Our paper aims to fill this gap in the existing literature. This paper investigates the long-term volatility transmission between gold and silver; using GARCH and VAR modelling, it finds that the volatility transmission from gold to silver is unidirectional. Volatility strategies using options can be designed to take advantage of this, especially in times when the volatility transmission is not captured by the markets. Additionally, the results appear useful for gaining better portfolio diversification benefits. Investors, for instance, could use the results of this study to make proper investment decisions during periods of economic downturns or inflation surges.

Computation doi: 10.3390/computation11020024

Authors: David Liang Ziji Zhang Miriam Rafailovich Marcia Simon Yuefan Deng Peng Zhang

Coarse-grained (CG) modeling is a well-established approach for accessing greater space and time scales than the computationally expensive all-atomic (AA) molecular dynamics (MD) simulations can reach. Popular CG methods follow a bottom-up architecture that matches properties of fine-grained or experimental data; their development is a daunting challenge because it requires deriving a new set of parameters for the potential calculation. We proposed a novel physics-informed machine learning (PIML) framework for a CG model and applied it, as verification, to modeling the SARS-CoV-2 spike glycoprotein. The PIML in the proposed framework employs a force-matching scheme with which we determined the force-field parameters. Our PIML framework defines its trainable parameters as the CG force-field parameters and predicts the instantaneous forces on each CG bead, learning the force-field parameters that best match the predicted forces with the reference forces. Using the learned interaction parameters, CGMD validation simulations reach the microsecond time scale with stability, at a simulation speed 40,000 times faster than the conventional AAMD. Compared with the traditional iterative approach, our framework matches the AA reference structure with better accuracy. The improved efficiency enhances the timeliness of research and development in producing long-term simulations of SARS-CoV-2 and opens avenues to help illuminate protein mechanisms and predict responses to environmental changes.
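
Force matching in miniature: with a single harmonic CG interaction F = -k * x, the least-squares fit of k to reference forces is closed-form. The sketch below is a one-parameter cartoon of the scheme on made-up data, not the paper's spike-protein force field:

```python
# Force matching in miniature: fit one harmonic CG parameter k so that
# predicted bead forces F = -k * x match reference (all-atom) forces in
# least squares; for this linear model the fit is closed-form.
def fit_spring_constant(disp, f_ref):
    """Minimizes sum((-k * x - F)**2) over k."""
    num = -sum(x * f for x, f in zip(disp, f_ref))
    den = sum(x * x for x in disp)
    return num / den

disp = [0.1, -0.2, 0.3, -0.1]     # bond displacements (made-up data)
f_ref = [-0.5, 1.0, -1.5, 0.5]    # reference forces, consistent with k = 5
k = fit_spring_constant(disp, f_ref)
```

In the PIML framework the same matching objective is optimized, but over many interaction parameters at once, with the network's trainable parameters playing the role of k.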

Computation doi: 10.3390/computation11020023

Authors: Shubham Gupta Subhodip Chatterjee Ayush Malviya Gurpreet Singh Arnab Chanda

Slips and falls are among the most serious public safety hazards. Adequate friction at the shoe–floor contact is necessary to reduce these risks. In the presence of slippery fluids such as water or oil, the footwear outsole is crucial for ensuring appropriate shoe–floor traction. While the influence of flooring and contaminants on footwear traction has been extensively studied across several outsole surfaces, limited studies have investigated the science of outsole design and how it affects footwear traction performance. In this work, the tread channels of a commonly found outsole pattern, i.e., horizontally oriented treads, were varied parametrically across widths (i.e., 2, 4, 6 mm) and gaps (i.e., 2, 3, 4 mm). Nine outsole designs were developed, and their traction, fluid pressures, and fluid flow rates during slipping were estimated using mechanical slip testing and a CFD-based computational framework. Outsoles with wider tread surfaces (i.e., 6 mm) showed increased slip risk on wet flooring. Outsoles with large gaps (i.e., 4 mm) exhibited increased traction performance when slipped on wet flooring (R2 = 0.86). These novel results are anticipated to provide valuable insights into the science of footwear traction and important guidelines for footwear manufacturers to optimize outsole surface design to reduce the risk of slips and falls. In addition, the presented CFD-based computational framework could help develop better outsole designs to further address this problem.

Computation doi: 10.3390/computation11020022

Authors: Zoia Duriagina Alexander Pankratov Tetyana Romanova Igor Litvinchev Julia Bennell Igor Lemishka Sergiy Maximov

To obtain high-quality and durable parts by 3D printing, specific characteristics (porosity and the proportion of particles of various sizes) in the mixture used for printing or sintering must be assured. To predict these characteristics, a mathematical model for the optimized packing of polyhedral objects (particles of titanium alloys) in a cuboidal container is presented, and a solution algorithm is developed. Numerical experiments demonstrate that the results obtained by the algorithm are very close to experimental findings. This justifies using numerical simulation instead of expensive experimentation.

Computation doi: 10.3390/computation11020021

Authors: Meglena Lazarova Fatima Sapundzhi

Fast-growing technology and the development of IT services have motivated new applications of stochastic processes and their properties. We establish a new connection between electronic process management and a relatively new stochastic process, the Non-central Polya-Aeppli process. This process is applied as the counting process in the mathematical construction of the given model, marking its first use as a counting process in electronic process management.

Computation doi: 10.3390/computation11020020

Authors: Intan Nurma Yulita Naufal Ariful Amri Akik Hidayat

In Indonesia, tomato is one of the horticultural products with the highest economic value. To maintain enhanced tomato plant production, it is necessary to monitor the growth of tomato plants, particularly the leaves. The quality and quantity of tomato plant production can be preserved with the aid of computer technology that identifies diseases in tomato plant leaves. A deep learning algorithm with a DenseNet architecture was implemented in this study. Multiple hyperparameter tests were conducted to determine the optimal model. The optimal model was constructed using two hidden layers, a DenseNet trainable layer on dense block 5, and a dropout rate of 0.4. The 10-fold cross-validation evaluation of the model yielded an accuracy of 95.7 percent and an F1-score of 95.4 percent. The model with the best assessment results was implemented in a mobile application to recognize tomato plant leaves.

Computation doi: 10.3390/computation11020019

Authors: Yurii Skob Sergiy Yakovlev Kyryl Korobchynskyi Mykola Kalinichenko

This study aims to reconstruct the hazardous zones after a hydrogen explosion at a fueling station and to assess the influence of the terrain landscape on the harmful consequences for personnel using numerical methods. These consequences are measured by fields of conditional probability of lethal and eardrum injuries for people exposed to explosion waves. The "Explosion Safety®" numerical tool is applied for non-stationary, three-dimensional reconstruction of the hazardous zone around the epicenter of the explosion of a premixed stoichiometric hemispheric hydrogen cloud. To define the values of the explosion wave's damaging factors (maximum overpressure and impulse of the pressure phase), a three-dimensional mathematical model of chemically active gas mixture dynamics is used. This allows controlling the current pressure at every local point of the actual space, taking into account the complex terrain. This information is used locally in every computational cell to evaluate the conditional probability of consequences for human beings such as eardrum rupture and lethal outcome, on the basis of probit analysis. To evaluate the influence of the landscape profile on the non-stationary three-dimensional overpressure distribution above the Earth's surface near the epicenter of an accidental hydrogen explosion, a series of computational experiments with different variants of the terrain is carried out. Each variant differs in the relative elevation of the explosion epicenter and the places where personnel may be located. The obtained results indicate that any change in the working-place level of the terrain relative to the explosion epicenter protects personnel from the explosion wave better than evenly leveled terrain, and deepening the explosion epicenter relative to the working-place level leads to better personnel protection than vice versa.
Moreover, the presented coupled computational fluid dynamics and probit analysis model can be recommended to risk-managing experts as a cost-effective and time-saving instrument to assess the efficiency of protection structures during safety procedures.
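
The probit step converts a damaging-factor value into a conditional probability through the standard normal CDF. A sketch using Python's stdlib NormalDist; the eardrum-rupture coefficients below are of the kind commonly quoted in consequence-analysis literature and are illustrative here, not taken from this paper:

```python
import math
from statistics import NormalDist

def injury_probability(overpressure_pa, a, b):
    """Probit analysis: probit value Y = a + b*ln(P); the conditional
    probability of the injury is Phi(Y - 5), with Phi the standard
    normal CDF."""
    y = a + b * math.log(overpressure_pa)
    return NormalDist().cdf(y - 5.0)

# Illustrative eardrum-rupture coefficients (peak overpressure P in Pa):
p_low = injury_probability(10_000.0, a=-15.6, b=1.93)    # 10 kPa
p_high = injury_probability(100_000.0, a=-15.6, b=1.93)  # 100 kPa
```

Evaluating this in every computational cell, with the cell's simulated peak overpressure as input, is what turns the CFD pressure field into the probability maps described above.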

Computation doi: 10.3390/computation11020018

Authors: Essraa Gamal Mohamed Rebeca P. Díaz Redondo Abdelrahim Koura Mohamed Sherif EL-Mofty Mohammed Kayed

The significance of age estimation arises from its applications in various fields, such as forensics, criminal investigation, and illegal immigration. Due to the increased importance of age estimation, this area of study requires more investigation and development. Several methods exist for age estimation using biometric traits such as the face, teeth, bones, and voice. Among them, teeth are quite convenient since they are resistant and durable and undergo several changes from birth through adulthood that can be used to derive age. In this paper, we summarize the common biometric traits for age estimation and how this information has been used in previous research studies. We have paid special attention to traditional machine learning methods and deep learning approaches used for dental age estimation. Thus, we summarize the advances in convolutional neural network (CNN) models that estimate dental age from radiological images, such as 3D cone-beam computed tomography (CBCT), X-ray, and orthopantomography (OPG) images. Finally, we also point out the main innovations that could potentially increase the performance of age estimation systems.

]]>Computation doi: 10.3390/computation11020017

Authors: Computation Editorial Office

High-quality academic publishing is built on rigorous peer review [...]

]]>Computation doi: 10.3390/computation11020016

Authors: Elias Dritsas Maria Trigka

Water is a valuable, necessary and unfortunately rare commodity in both developing and developed countries all over the world. It is undoubtedly the most important natural resource on the planet and constitutes an essential nutrient for human health. Geo-environmental pollution can be caused by many different types of waste, such as municipal solid, industrial, agricultural (e.g., pesticides and fertilisers), medical, etc., making the water unsuitable for use by any living being. Therefore, finding efficient methods to automate the checking of water suitability is of great importance. In the context of this research work, we leveraged a supervised learning approach in order to design predictive models that are as accurate as possible from a labelled training dataset for the identification of water suitability, either for consumption or other uses. We assume a set of physicochemical and microbiological parameters as input features that help represent the water&rsquo;s status and determine its suitability class (namely safe or nonsafe). From a methodological perspective, the problem is treated as a binary classification task, and the machine learning models&rsquo; performance (such as Naive Bayes&ndash;NB, Logistic Regression&ndash;LR, k Nearest Neighbours&ndash;kNN, tree-based classifiers and ensemble techniques) is evaluated with and without the application of class balancing (i.e., use or nonuse of the Synthetic Minority Oversampling Technique&ndash;SMOTE), comparing them in terms of Accuracy, Recall, Precision and Area Under the Curve (AUC). In our demonstration, the results show that the Stacking classification model after SMOTE with 10-fold cross-validation outperforms the others with an Accuracy and Recall of 98.1%, Precision of 100% and an AUC equal to 99.9%. In conclusion, in this article, a framework is presented that can support the researchers&rsquo; efforts toward water quality prediction using machine learning (ML).
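The evaluation metrics named above (Accuracy, Recall, Precision) follow directly from the binary confusion matrix; a minimal stand-alone sketch, independent of any particular classifier or of SMOTE:

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, Recall and Precision from a binary confusion matrix.
    Here class 1 stands for 'safe' water and class 0 for 'nonsafe'
    (an assumed labelling, for illustration)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    recall = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    return accuracy, recall, precision
```

Recall is the metric most sensitive to class imbalance, which is why SMOTE-style balancing matters in this setting.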

]]>Computation doi: 10.3390/computation11020015

Authors: Alexander G. Kolpakov Igor V. Andrianov Sergey I. Rakin

The paper is devoted to the problem of the propagation of elastic waves in composites with initial stresses. We suppose the initial stresses are well within the elastic regime. We deal with the long-wave case and use the asymptotic homogenization technique based on the two-scale asymptotic approach. The main difficulty lies in solving the local (cell) problem, i.e., the boundary value problem on a periodically repeating fragment of the composite. In general, the local problem cannot be solved explicitly. In our work, formulas valid for arbitrary initial stresses are obtained in a form convenient for solution by standard codes. An analytical solution is obtained for small initial stresses. The asymptotic expansions use a small parameter characterizing the smallness of the initial stresses. In the zero approximation, composites without initial stresses are considered; the first approximation takes into account their influence on wave propagation. Two particular cases are considered in detail: laminated media and frame (honeycomb cell) composites. The analyzed frame composite can be used for the modeling of porous media. We select these two cases for the following reasons. First, laminated and porous materials are widely used in practice. Second, for these materials, the homogenized coefficients can be computed in explicit form for an arbitrary value of the initial stresses. The dependence of the velocity of elastic waves on the initial stresses differs between laminated and homogeneous bodies. The initial tension increases the velocity of elastic waves in both cases, but the quantitative effect can vary greatly. For frame composites modeling porous bodies, the initial tension can increase or decrease the velocity of elastic waves (the initial tension decreases the velocity of elastic waves in a porous body with an inverted honeycomb periodicity cell). A decrease in the velocity of elastic waves is impossible in homogeneous media.
The problem under consideration is related, in particular, to core sample analysis in geophysics. This question is discussed in the paper. We also analyze some features of the application of the asymptotic homogenization procedure to the dynamical problem of stressed composite materials, i.e., the nonadditivity of the homogenization of a sum of operators.
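For the laminate case mentioned above, in the stress-free zero approximation the long-wave modulus across the layers reduces to the textbook volume-weighted harmonic mean (a Reuss-type average), with the density averaged arithmetically. A minimal sketch of the resulting wave speed, without the initial-stress corrections derived in the paper:

```python
import math

def laminate_wave_speed(layers):
    """Long-wave speed across a periodic laminate (stress-free case).
    `layers` is a list of (volume_fraction, modulus, density) with
    volume fractions summing to 1.  Effective modulus: volume-weighted
    harmonic mean; effective density: volume-weighted arithmetic mean.
    Illustrative textbook averaging, not the paper's stressed formulas."""
    e_eff = 1.0 / sum(f / e for f, e, _ in layers)
    rho_eff = sum(f * rho for f, _, rho in layers)
    return math.sqrt(e_eff / rho_eff)
```

Replacing one layer with a softer material lowers the harmonic-mean modulus and hence the wave speed, which is the kind of dependence the paper then perturbs by the initial stresses.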

]]>Computation doi: 10.3390/computation11010014

Authors: Elena Veselova Anna Maslovskaya Alexander Chebotarev

The paper is devoted to the theoretical analysis and numerical implementation of a mathematical model of a nonlinear reaction&ndash;diffusion system on the COMSOL Multiphysics platform. The applied problem of the computer simulation of polarization switching in thin ferroelectric films is considered. The model is based on the Landau&ndash;Ginzburg&ndash;Devonshire&ndash;Khalatnikov thermodynamic approach and is formalized as an initial-boundary value problem for a semilinear parabolic partial differential equation. The theoretical foundations of the model are explained. A user-interface application was developed with COMSOL Multiphysics. A series of computational experiments was performed to study the ferroelectric hysteresis and the temperature dependence of polarization, using the example of a ferroelectric barium titanate film.
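The initial-boundary value problem above can be illustrated by a deliberately simple 1D explicit finite-difference scheme for a semilinear parabolic equation with a double-well (Landau-type) nonlinearity; all coefficients are illustrative, not the COMSOL model or barium titanate parameters of the study:

```python
def relax_polarization(n=50, steps=2000, dt=0.01, dx=1.0,
                       diff=0.1, alpha=-1.0, beta=1.0, field=0.0):
    """Explicit Euler scheme for
        dP/dt = diff * d2P/dx2 - (alpha*P + beta*P**3) + field
    on a 1D grid with zero-flux ends.  alpha < 0, beta > 0 give the
    ferroelectric double-well potential; coefficients are illustrative."""
    p = [0.5] * n                       # uniform initial polarization
    for _ in range(steps):
        new = p[:]
        for i in range(n):
            left = p[i - 1] if i > 0 else p[i]        # zero-flux boundary
            right = p[i + 1] if i < n - 1 else p[i]
            lap = (left - 2.0 * p[i] + right) / dx ** 2
            new[i] = p[i] + dt * (diff * lap
                                  - (alpha * p[i] + beta * p[i] ** 3)
                                  + field)
        p = new
    return p
```

With these signs the film relaxes to one of the two stable polarization states; sweeping `field` up and down is what traces out the hysteresis loop studied in the paper.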

]]>Computation doi: 10.3390/computation11010013

Authors: Gunjan Singh Arpita Nagpal

One of the most effective text classification approaches for learning extensive information is incremental learning. A major issue that arises is enhancing the accuracy, as texts comprise a large number of terms. In order to address this issue, a new incremental text classification approach is designed using the proposed hybrid optimization algorithm, named the Henry Fuzzy Competitive Multi-verse Optimizer (HFCVO)-based Deep Maxout Network (DMN). Here, the optimal features are selected using Invasive Weed Tunicate Swarm Optimization (IWTSO), which is devised by integrating Invasive Weed Optimization (IWO) and the Tunicate Swarm Algorithm (TSA). The incremental text classification is effectively performed using the DMN, where the classifier is trained utilizing the HFCVO. The developed HFCVO is derived by incorporating the features of Henry Gas Solubility Optimization (HGSO) and the Competitive Multi-verse Optimizer (CMVO) with fuzzy theory. The proposed HFCVO-based DMN achieved a maximum TPR of 0.968, a maximum TNR of 0.941, a low FNR of 0.032, a high precision of 0.954, and a high accuracy of 0.955.
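The Deep Maxout Network named above is built from maxout units. As a minimal sketch with illustrative, hand-picked weights (not anything trained by HFCVO), a single maxout activation takes the maximum over k affine pieces:

```python
def maxout_unit(x, weight_sets, biases):
    """Maxout activation: max_k (w_k . x + b_k) over k affine pieces.
    `weight_sets` is a list of k weight vectors, `biases` a list of k
    scalars; all values here are illustrative."""
    return max(sum(w * xi for w, xi in zip(ws, x)) + b
               for ws, b in zip(weight_sets, biases))
```

Because the unit outputs the maximum of linear pieces, it can approximate a wide range of convex activations, which is the usual motivation for maxout layers.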

]]>Computation doi: 10.3390/computation11010012

Authors: Nakyeong Sung Suhwan Kim Namsuk Cho

With the development and proliferation of unmanned weapons systems, path planning is becoming increasingly important. Existing path-planning algorithms mainly assume a well-known environment, for which pre-planning is suitable; the actual ground battlefield, however, is uncertain, and numerous contingencies occur. In this study, we present a novel, efficient path-planning algorithm based on a potential field that quickly changes the path in a constantly changing environment. The potential field is composed of a set of functions representing enemy threats and a penalty term representing the distance to the target area. We also introduce a new threat function, using a multivariate skew-normal distribution, that accurately expresses the enemy threat in ground combat.
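The potential-field idea above can be sketched as gradient descent on an attraction-plus-threat surface. For brevity, an isotropic Gaussian bump stands in for the paper's multivariate skew-normal threat, and all parameters are illustrative:

```python
import math

def plan_path(start, goal, threats, steps=500, lr=0.05, w_goal=0.5):
    """Gradient descent on U = w_goal*||p - goal||^2 + sum of Gaussian
    threat bumps.  `threats` is a list of (cx, cy, amplitude, sigma).
    A symmetric Gaussian replaces the paper's skew-normal threat."""
    x, y = start
    path = [(x, y)]
    for _ in range(steps):
        # attractive term pulls toward the goal
        gx = 2.0 * w_goal * (x - goal[0])
        gy = 2.0 * w_goal * (y - goal[1])
        # each threat bump pushes the gradient away from its center
        for cx, cy, amp, sigma in threats:
            d2 = (x - cx) ** 2 + (y - cy) ** 2
            coef = amp * math.exp(-d2 / (2.0 * sigma * sigma)) / (sigma * sigma)
            gx -= coef * (x - cx)
            gy -= coef * (y - cy)
        x -= lr * gx
        y -= lr * gy
        path.append((x, y))
    return path
```

Because each step only re-evaluates the local gradient, moving or adding a threat mid-run simply bends the remaining path, which is the responsiveness the abstract emphasizes.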

]]>Computation doi: 10.3390/computation11010011

Authors: Ema Carnia Rinaldi Wilopo Herlina Napitupulu Nursanti Anggriani Asep K. Supriatna

In max-plus algebra, the algorithms for determining the eigenvector of an irreducible matrix include the power algorithm and the Kleene star algorithm. In this research, a modified Kleene star algorithm is discussed to compensate for the disadvantages of the Kleene star algorithm. The Kleene star algorithm&rsquo;s time complexity is O(n(n!)), and the new Kleene star algorithm&rsquo;s time complexity is O(n^4), while the power algorithm&rsquo;s time complexity cannot be calculated. This research also applies max-plus algebra to a railroad network scheduling problem, constructing a graphical user interface to perform schedule calculations quickly and easily.
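The power algorithm mentioned above can be sketched directly: iterate the max-plus matrix product until the vector sequence repeats up to an additive constant, at which point the average growth rate is the eigenvalue (the maximum cycle mean). A minimal sketch for an irreducible matrix:

```python
def maxplus_eigenvalue(a, max_iter=100):
    """Power algorithm in max-plus algebra.  The max-plus product is
    (A (x) v)_i = max_j (a[i][j] + v[j]).  Iterate until some x_k equals
    an earlier x_p plus a constant vector; the eigenvalue (maximum cycle
    mean) is then that constant divided by (k - p)."""
    n = len(a)
    x = [0.0] * n
    history = [x]
    for k in range(1, max_iter + 1):
        x = [max(a[i][j] + x[j] for j in range(n)) for i in range(n)]
        for p, past in enumerate(history):
            diffs = [x[i] - past[i] for i in range(n)]
            if max(diffs) - min(diffs) < 1e-12:   # constant vector found
                return diffs[0] / (k - p)
        history.append(x)
    raise ValueError("no periodic regime found; is A irreducible?")
```

For a railway timetable, the entries of A are travel times and the eigenvalue is the minimum feasible cycle time of the periodic schedule.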

]]>Computation doi: 10.3390/computation11010010

Authors: Thitiworada Srisuwandee Sombat Sindhuchao Thitinon Srisuwandee

The disposal of infectious waste remains one of the most severe medical, social, and environmental problems in almost every country. Choosing the right location and arranging the most suitable transport route is one of the main issues in managing hazardous waste. Identifying a site for the disposal of infectious waste is a complicated process because both tangible and intangible factors must be considered together, and it also depends on various rules and regulations. This research aims to solve the problem of selecting the size and location of infectious waste incinerators for 109 community hospitals in the upper part of northeastern Thailand by applying a differential evolution algorithm, with the objective of minimizing the total system cost, which consists of the cost of transporting infectious waste, the fixed costs, and the variable cost of operating the infectious waste incinerators. The developed differential evolution algorithm produces vectors differently from the conventional differential evolution: instead of a single set of vectors, three sets are created to search for the solution. In addition to solving the problem of the case study, this research conducts numerical experiments with randomly generated data to measure the performance of the differential evolution algorithm. The results show that the proposed algorithm efficiently solves the problem and can find the global optimal solution for the problem studied.
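As a baseline for the differential evolution described above, a classic DE/rand/1/bin sketch on a toy objective is shown; it uses a single population of vectors, not the three-set variant developed in this research, and all parameters are illustrative:

```python
import random

def differential_evolution(obj, bounds, pop_size=20, f=0.5, cr=0.9,
                           generations=200, seed=1):
    """Classic DE/rand/1/bin minimizer.  `bounds` is a list of
    (low, high) per dimension; f is the mutation factor, cr the
    crossover rate.  A single-population baseline sketch only."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [obj(ind) for ind in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # three distinct donors, none equal to the target vector i
            r1, r2, r3 = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)        # guarantees one mutated gene
            trial = []
            for j in range(dim):
                if rng.random() < cr or j == jrand:
                    v = pop[r1][j] + f * (pop[r2][j] - pop[r3][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)   # clip to the search box
                else:
                    v = pop[i][j]
                trial.append(v)
            tf = obj(trial)
            if tf <= fit[i]:                  # greedy one-to-one selection
                pop[i], fit[i] = trial, tf
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]
```

In the siting problem, the decision vector would encode incinerator sizes and locations and `obj` the total system cost; here a simple sphere function stands in for that cost model.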

]]>Computation doi: 10.3390/computation11010009

Authors: Shambhu Bhattarai Pradeep Mareta Philip W. Crawford Jonathan M. Kessler Christina M. Ragain

The ability of density functional theory (DFT) using the B3LYP functional with the cc-pVTZ basis set to accurately predict the electrochemical properties of 20 3-aryl-quinoxaline-2-carbonitrile 1,4-di-N-oxide derivatives in dimethylformamide (DMF) was investigated and compared to previous predictions from B3LYP/6-31G and B3LYP/lanl2dz. The B3LYP/cc-pVTZ method was an improvement over the B3LYP/6-31G and B3LYP/lanl2dz methods, as it was able to accurately predict the first reduction potential of the diazine ring (wave 1) for all of the 3-aryl-quinoxaline-2-carbonitrile 1,4-di-N-oxide derivatives. The B3LYP/cc-pVTZ-predicted electrochemical potentials had a strong correlation with the experimental values for wave 1. None of the methods demonstrated the ability to predict the nitro wave reduction potential for derivatives containing a nitro group. The B3LYP/cc-pVTZ-predicted electrochemical potentials for the second reduction of the diazine ring (wave 2) had a low correlation with the experimental values for the derivatives without a nitro group and no correlation when the derivatives with a nitro group were included in the analysis.

]]>