Computation, Volume 11, Issue 5 (May 2023) – 19 articles

Cover Story: In common construction practice, various buildings can be found that consist of a lower, older, reinforced concrete part and a more recent upper steel part, forming a so-called “hybrid” building type. Conventional regulations provide full guidelines for the seismic design of buildings constructed with the same material throughout, “neglecting” the case of vertical hybrid buildings, for which only limited related research is available. Here, an effort is made to fill this gap in the knowledge about the behavior of this hybrid building type under sequential earthquakes in 3D space. In addition, two boundary connections of the steel part upon the r/c one are distinguished for examination in the non-linear time-history analyses, leading to useful conclusions for the safer design of hybrid buildings.
19 pages, 6789 KiB  
Article
Opinion Formation on Social Networks—The Effects of Recurrent and Circular Influence
by Vesa Kuikka
Computation 2023, 11(5), 103; https://doi.org/10.3390/computation11050103 - 22 May 2023
Cited by 2 | Viewed by 1432
Abstract
We present a generalised complex contagion model for describing behaviour and opinion spreading on social networks. Recurrent interactions between adjacent nodes and circular influence in loops in the network structure enable the modelling of influence spreading on the network scale. We have presented details of the model in our earlier studies. Here, we focus on the interpretation of the model and discuss its features by using conventional concepts in the literature. In addition, we discuss how the model can be extended to account for specific social phenomena in social networks. We demonstrate the differences between the results of our model and a simple contagion model. Results are provided for a small social network and a larger collaboration network. As an application of the model, we present a method for profiling individuals based on their out-centrality, in-centrality, and betweenness values in the social network structure. These measures have been defined consistently with our spreading model based on an influence spreading matrix. The influence spreading matrix captures the directed spreading probabilities between all node pairs in the network structure. Our results show that recurrent and circular influence has considerable effects on node centrality values and spreading probabilities in the network structure. Full article
(This article belongs to the Special Issue Computational Social Science and Complex Systems)
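As a rough illustration of how centrality scores can be read off an influence spreading matrix of the kind described above, the sketch below computes out- and in-centrality as normalised row and column sums of a small, invented probability matrix; the matrix values and the normalisation are assumptions for the example, not taken from the paper.

```python
import numpy as np

# Hypothetical influence spreading matrix P for a 4-node network:
# P[i, j] = probability that influence starting at node i reaches node j.
P = np.array([
    [1.0, 0.6, 0.3, 0.2],
    [0.5, 1.0, 0.4, 0.1],
    [0.2, 0.3, 1.0, 0.7],
    [0.1, 0.2, 0.6, 1.0],
])

# Out-centrality: how strongly a node spreads influence to the rest of the network.
out_centrality = (P.sum(axis=1) - np.diag(P)) / (P.shape[0] - 1)
# In-centrality: how strongly a node is reached by influence from the rest.
in_centrality = (P.sum(axis=0) - np.diag(P)) / (P.shape[0] - 1)

for node, (c_out, c_in) in enumerate(zip(out_centrality, in_centrality)):
    print(f"node {node}: out-centrality {c_out:.2f}, in-centrality {c_in:.2f}")
```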
23 pages, 9773 KiB  
Article
The Behavior of Hybrid Reinforced Concrete-Steel Buildings under Sequential Ground Excitations
by Paraskevi K. Askouni
Computation 2023, 11(5), 102; https://doi.org/10.3390/computation11050102 - 18 May 2023
Cited by 3 | Viewed by 1011
Abstract
In common construction practice, various examples can be found involving a building type consisting of a lower, older, reinforced concrete structure and a more recent upper steel part, forming a so-called “hybrid” building. Conventional seismic design rules give full guidelines for the earthquake design of buildings constructed with the same material throughout. The current seismic codes, however, do not provide specific design and detailing guidelines for vertical hybrid buildings, and only limited research is available in the literature, leaving a scientific gap that needs to be investigated. In the present work, an effort is made to fill this gap in the knowledge about the behavior of this hybrid building type under sequential earthquakes, which are reported in the literature to aggravate the seismic structural response. Three-dimensional models of hybrid reinforced concrete–steel frames are exposed to sequential ground excitations in the horizontal and vertical directions, while the elastoplastic behavior of the structural elements is considered in the time domain. The lower reinforced concrete parts of the hybrid buildings are detailed here, by a simple approximation, as corresponding to an older structure. In addition, two boundary connections of the structural steel part upon the r/c part are distinguished for examination in the elastoplastic analyses. Comparisons of the numerical analysis results of the hybrid frames for the examined connections are carried out. The seismic response plots of the non-linear dynamic time-domain analyses of the 3D hybrid frames subjected to sequential ground excitations yield useful conclusions and provide guidelines for a safer seismic design of the hybrid building type, which is not covered by the current codes despite being common practice. Full article
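For readers unfamiliar with non-linear time-history analysis under sequential excitations, the sketch below integrates a single-degree-of-freedom elastic-perfectly-plastic oscillator through two synthetic ground-motion pulses separated by a quiet gap. All parameters and the excitation are invented for illustration; the paper itself analyses full 3D reinforced concrete–steel frame models.

```python
import numpy as np

# Minimal elastic-perfectly-plastic SDOF oscillator driven by two synthetic
# ground-motion pulses separated by a quiet gap (a stand-in for a seismic sequence).
m, k, fy = 1.0e3, 4.0e5, 2.0e3          # mass [kg], stiffness [N/m], yield force [N]
zeta = 0.05                             # viscous damping ratio
c = 2.0 * zeta * np.sqrt(k * m)         # damping coefficient [N s/m]

dt = 0.001
t = np.arange(0.0, 40.0, dt)
ag = 3.0 * np.sin(2 * np.pi * 2.0 * t) * np.exp(-0.5 * (t - 5.0) ** 2)    # first event
ag += 2.0 * np.sin(2 * np.pi * 1.5 * t) * np.exp(-0.5 * (t - 25.0) ** 2)  # second event

u = v = fs = 0.0
u_hist = np.empty_like(t)
for i, a_g in enumerate(ag):
    acc = (-m * a_g - c * v - fs) / m   # m*u'' + c*u' + fs(u) = -m*ag
    v += acc * dt                       # simple explicit (semi-implicit Euler) update
    du = v * dt
    u += du
    fs = float(np.clip(fs + k * du, -fy, fy))  # elastic-perfectly-plastic spring force
    u_hist[i] = u

print(f"peak displacement: {np.abs(u_hist).max():.4f} m, residual: {u_hist[-1]:.4f} m")
```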
11 pages, 410 KiB  
Article
Social Learning and the Exploration-Exploitation Tradeoff
by Brian Mintz and Feng Fu
Computation 2023, 11(5), 101; https://doi.org/10.3390/computation11050101 - 18 May 2023
Viewed by 1086
Abstract
Cultures around the world show varying levels of conservatism. While maintaining traditional ideas prevents wrong ones from being embraced, it also slows or prevents adaptation to new times. Without exploration there can be no improvement, but this effort is often wasted when it fails to produce better results, making it better to exploit the best known option. This tension is known as the exploration/exploitation issue, and it occurs at the individual and group levels whenever decisions are made. As such, it has been investigated across many disciplines. We extend previous work by approximating a continuum of traits under local exploration, employing the method of adaptive dynamics, and studying multiple fitness functions. In this work, we ask how nature would solve the exploration/exploitation issue by allowing natural selection to operate on an exploration parameter in a variety of contexts, thinking of exploration as mutation in a trait space with a varying fitness function. Specifically, we study how exploration rates evolve by applying adaptive dynamics to the replicator-mutator equation under two types of fitness functions. For the first, payoffs are accrued from playing a two-player, two-action symmetric game; we consider representatives of all games in this class, including the Prisoner’s Dilemma, Hawk-Dove, and Stag Hunt games, and find that exploration rates often evolve downwards but can also undergo neutral selection, depending on the game’s parameters or initial conditions. Second, we study time-dependent fitness with a function having a single oscillating peak. By increasing the period, we see a jump in the optimal exploration rate, which then decreases towards zero as the frequency of environmental change increases. These results establish several possible evolutionary scenarios for exploration rates, providing insight into many applications, including why we see such diversity in rates of cultural change. Full article
(This article belongs to the Special Issue Computational Social Science and Complex Systems)
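A minimal sketch of the replicator-mutator machinery the abstract refers to, here discretised on a one-dimensional trait grid with a moving single-peaked fitness function; the grid size, mutation rate, and fitness form are assumptions for illustration rather than the paper's setup.

```python
import numpy as np

# Discrete-time replicator-mutator dynamics on a 1-D trait grid with local mutation.
n = 51
traits = np.linspace(0.0, 1.0, n)
mu = 0.05                                # exploration (mutation) rate, assumed value

# Local mutation kernel: stay with prob 1 - mu, step to a neighbouring trait with prob mu/2.
Q = np.eye(n) * (1.0 - mu)
Q += np.diag(np.full(n - 1, mu / 2), 1) + np.diag(np.full(n - 1, mu / 2), -1)
Q[0, 0] += mu / 2                        # reflect mutation at the grid boundaries
Q[-1, -1] += mu / 2

def fitness(step):
    """Single fitness peak oscillating over the trait space (assumed functional form)."""
    peak = 0.5 + 0.4 * np.sin(2 * np.pi * step / 200.0)
    return np.exp(-((traits - peak) ** 2) / 0.02)

x = np.full(n, 1.0 / n)                  # start from a uniform trait distribution
for step in range(2000):
    x = (x * fitness(step)) @ Q          # selection followed by mutation
    x /= x.sum()                         # renormalise to a probability distribution

print("mean trait after 2000 steps:", round(float(traits @ x), 3))
```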
14 pages, 5813 KiB  
Article
Development of Trading Strategies Using Time Series Based on Robust Interval Forecasts
by Evgeny Nikulchev and Alexander Chervyakov
Computation 2023, 11(5), 99; https://doi.org/10.3390/computation11050099 - 15 May 2023
Cited by 2 | Viewed by 2137
Abstract
The task of time series forecasting is to estimate future values based on available observational data. Prediction interval methods aim to find not the next point but the interval into which the future value, or several values over the forecast horizon, can fall given current and historical data. This article proposes an approach for modeling a robust interval forecast for a stock portfolio. A trading strategy was developed to profit from trading stocks in the market, and the study used real trading data. Forty securities were used to calculate the IMOEX; the securities with the highest weight were GAZP, LKOH, and SBER. This definition of the strategy makes it possible to operate with large portfolios. Forecast accuracy was increased by estimating the forecast interval: a range of values, rather than specific points, was taken as the forecasting result, which guarantees the reliability of the forecast. The use of a prediction interval approach for share prices makes it possible to increase trading profitability. Full article
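A toy version of the interval-based trading idea: an empirical prediction interval is built from recent one-step price changes, and trades are triggered when the next observation leaves that interval. The series, the quantile levels, and the trading rule are stand-ins; the paper's robust interval construction and portfolio logic are more involved.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic daily closing prices standing in for a stock series (not real IMOEX data).
price = 100 + np.cumsum(rng.normal(0.05, 1.0, 500))

window = 60
signals = []
for t in range(window, len(price) - 1):
    history = price[t - window:t]
    # Naive one-step forecast plus an empirical interval from past one-step changes.
    forecast = history[-1]
    changes = np.diff(history)
    lo, hi = forecast + np.quantile(changes, 0.05), forecast + np.quantile(changes, 0.95)
    # Toy rule: buy when the next price falls below the interval, sell when above it.
    if price[t + 1] < lo:
        signals.append(("buy", t + 1))
    elif price[t + 1] > hi:
        signals.append(("sell", t + 1))

print(f"{len(signals)} trading signals generated on the synthetic series")
```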
20 pages, 10068 KiB  
Article
A Multiple Response Prediction Model for Dissimilar AA-5083 and AA-6061 Friction Stir Welding Using a Combination of AMIS and Machine Learning
by Rungwasun Kraiklang, Chakat Chueadee, Ganokgarn Jirasirilerd, Worapot Sirirak and Sarayut Gonwirat
Computation 2023, 11(5), 100; https://doi.org/10.3390/computation11050100 - 15 May 2023
Cited by 2 | Viewed by 1172
Abstract
This study presents a methodology that combines artificial multiple intelligence systems (AMISs) and machine learning to forecast the ultimate tensile strength (UTS), maximum hardness (MH), and heat input (HI) of AA-5083 and AA-6061 friction stir welding. The machine learning model integrates two machine learning methods, Gaussian process regression (GPR) and a support vector machine (SVM), into a single model, and then uses the AMIS as the decision fusion strategy to merge SVM and GPR. The generated model was utilized to anticipate three objectives based on seven controlled/input parameters. These parameters were: tool tilt angle, rotating speed, travel speed, shoulder diameter, pin geometry, type of reinforcing particles, and tool pin movement mechanism. The effectiveness of the model was evaluated using a two-experiment framework. In the first experiment, we used two newly produced datasets, (1) the 7PI-V1 dataset and (2) the 7PI-V2 dataset, and compared the results with state-of-the-art approaches. The second experiment used existing datasets from the literature with varying base materials and parameters. The computational results revealed that the proposed method produced more accurate prediction results than the previous methods. For all datasets, the proposed strategy outperformed existing methods and state-of-the-art processes by an average of 1.35% to 6.78%. Full article
(This article belongs to the Section Computational Engineering)
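To make the fusion idea concrete, the sketch below trains GPR and SVM regressors on synthetic data and searches for a single blending weight on a validation split; in the paper this fusion role is played by the AMIS metaheuristic over the real welding datasets, so everything here (data, weight grid, error metric) is an illustrative assumption.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(1)
# Synthetic stand-in for the FSW dataset: 7 process parameters -> tensile strength.
X = rng.uniform(0, 1, size=(200, 7))
y = 200 + 50 * X[:, 1] - 30 * X[:, 2] ** 2 + rng.normal(0, 2, 200)

X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)
gpr = GaussianProcessRegressor().fit(X_tr, y_tr)
svr = SVR(C=10.0).fit(X_tr, y_tr)

# Grid-search a single fusion weight on the validation split; the paper's AMIS
# metaheuristic plays this decision-fusion role in a far more sophisticated way.
weights = np.linspace(0, 1, 21)
errors = [mean_absolute_error(y_val, w * gpr.predict(X_val) + (1 - w) * svr.predict(X_val))
          for w in weights]
best_w = weights[int(np.argmin(errors))]
print(f"best fusion weight for GPR: {best_w:.2f}, validation MAE: {min(errors):.2f}")
```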
23 pages, 7260 KiB  
Article
Reconstruction of Meteorological Records by Methods Based on Dimension Reduction of the Predictor Dataset
by Carlos Balsa, Murilo M. Breve, Carlos V. Rodrigues and José Rufino
Computation 2023, 11(5), 98; https://doi.org/10.3390/computation11050098 - 12 May 2023
Viewed by 1220
Abstract
The reconstruction or prediction of meteorological records through the Analog Ensemble (AnEn) method is very efficient when the number of predictor time series is small. Thus, in order to take advantage of the richness and diversity of information contained in a large number of predictors, it is necessary to reduce their dimensions. This study presents methods to accomplish such reduction, allowing the use of a high number of predictor variables. In particular, the techniques of Principal Component Analysis (PCA) and Partial Least Squares (PLS) are used to reduce the dimension of the predictor dataset without loss of essential information. The combination of the AnEn and PLS techniques results in a very efficient hybrid method (PLSAnEn) for reconstructing or forecasting unstable meteorological variables, such as wind speed. This hybrid method is computationally demanding, but its performance can be improved via parallelization or the introduction of variants in which all possible analogs are previously clustered. The multivariate linear regression methods used on the new variables resulting from the PCA or PLS techniques also proved to be efficient, especially for the prediction of meteorological variables without local oscillations, such as pressure. Full article
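A compact sketch of the PLS-plus-Analog-Ensemble idea: the predictor set is compressed with PLS, and the missing target values are reconstructed as the average of the k most similar historical situations in the reduced space. The data, component count, and k below are invented for the example.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(2)
# Synthetic stand-in: 50 predictor series, wind speed as the target to reconstruct.
X_hist = rng.normal(size=(1000, 50))
y_hist = X_hist[:, :5].sum(axis=1) + rng.normal(0, 0.3, 1000)
X_gap = rng.normal(size=(10, 50))          # period where the target record is missing

# Step 1: compress the 50 predictors to a few latent components with PLS.
pls = PLSRegression(n_components=3).fit(X_hist, y_hist)
Z_hist, Z_gap = pls.transform(X_hist), pls.transform(X_gap)

# Step 2: Analog Ensemble in the reduced space: average the targets of the
# k historical situations most similar to each gap situation.
k = 20
nn = NearestNeighbors(n_neighbors=k).fit(Z_hist)
_, idx = nn.kneighbors(Z_gap)
y_reconstructed = y_hist[idx].mean(axis=1)
print(y_reconstructed.round(2))
```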
15 pages, 1158 KiB  
Article
A Flexible and General-Purpose Platform for Heterogeneous Computing
by Jose Juan Garcia-Hernandez, Miguel Morales-Sandoval and Erick Elizondo-Rodríguez
Computation 2023, 11(5), 97; https://doi.org/10.3390/computation11050097 - 11 May 2023
Cited by 1 | Viewed by 1358
Abstract
In the big data era, processing large amounts of data imposes several challenges, mainly in terms of performance. Complex operations in data science, such as deep learning, large-scale simulations, and visualization applications, can consume a significant amount of computing time. Heterogeneous computing is an attractive alternative for algorithm acceleration, using not one but several different kinds of computing devices (CPUs, GPUs, or FPGAs) simultaneously. Accelerating an algorithm for a specific device under a specific framework, e.g., CUDA/GPU, provides a solution with the highest possible performance at the cost of a loss in generality and requires an experienced programmer. On the contrary, heterogeneous computing allows one to hide the details pertaining to the simultaneous use of different technologies in order to accelerate computation. However, effective heterogeneous computing implementation still requires mastering the underlying design flow. Aiming to fill this gap, in this paper we present a heterogeneous computing platform (HCP). Regarding its main features, this platform allows non-experts in heterogeneous computing to deploy, run, and evaluate high-computational-demand algorithms following a semi-automatic design flow. Given the implementation of an algorithm in C with minimal format requirements, the platform automatically generates the parallel code using a code analyzer, which is adapted to target a set of available computing devices. Thus, while an experienced heterogeneous computing programmer is not required, the process can run over the available computing devices on the platform as it is not an ad hoc solution for a specific computing device. The proposed HCP relies on the OpenCL specification for interoperability and generality. The platform was validated and evaluated in terms of generality and efficiency through a set of experiments using the algorithms of the Polybench/C suite (version 3.2) as the input. Different configurations for the platform were used, considering CPUs only, GPUs only, and a combination of both. The results revealed that the proposed HCP was able to achieve accelerations of up to 270× for specific classes of algorithms, i.e., parallel-friendly algorithms, while its use required almost no expertise in either OpenCL or heterogeneous computing from the programmer/end-user. Full article
(This article belongs to the Section Computational Engineering)
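For orientation, the snippet below shows the kind of OpenCL device code and host boilerplate that such a platform hides from the end-user, here written by hand with pyopencl (an assumption for the example; the platform described in the paper generates and schedules the OpenCL code itself from plain C input).

```python
import numpy as np
import pyopencl as cl

# Hand-written OpenCL vector addition; device selection is left to
# create_some_context(), which may pick a CPU or a GPU.
a = np.random.rand(1_000_000).astype(np.float32)
b = np.random.rand(1_000_000).astype(np.float32)

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)
mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

program = cl.Program(ctx, """
__kernel void vadd(__global const float *a, __global const float *b, __global float *out) {
    int i = get_global_id(0);
    out[i] = a[i] + b[i];
}
""").build()

program.vadd(queue, a.shape, None, a_buf, b_buf, out_buf)
result = np.empty_like(a)
cl.enqueue_copy(queue, result, out_buf)
print("max error:", float(np.abs(result - (a + b)).max()))
```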
3704 KiB  
Article
Diabetes Classification Using Machine Learning Techniques
by Methaporn Phongying and Sasiprapa Hiriote
Computation 2023, 11(5), 96; https://doi.org/10.3390/computation11050096 - 10 May 2023
Cited by 5 | Viewed by 2960
Abstract
Machine learning techniques play an increasingly prominent role in medical diagnosis. With the use of these techniques, patients’ data can be analyzed to find patterns or facts that are difficult to explain, making diagnoses more reliable and convenient. The purpose of this research was to compare the efficiency of diabetic classification models using four machine learning techniques: decision trees, random forests, support vector machines, and K-nearest neighbors. In addition, new diabetic classification models are proposed that incorporate hyperparameter tuning and the addition of some interaction terms into the models. These models were evaluated based on accuracy, precision, recall, and the F1-score. The results of this study show that the proposed models with interaction terms have better classification performance than those without interaction terms for all four machine learning techniques. Among the proposed models with interaction terms, random forest classifiers had the best performance, with 97.5% accuracy, 97.4% precision, 96.6% recall, and a 97% F1-score. The findings from this study can be further developed into a program that can effectively screen potential diabetes patients. Full article
(This article belongs to the Section Computational Engineering)
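A minimal sklearn sketch of the modelling recipe described above: pairwise interaction terms are added to the features and a random forest is tuned by grid search. The dataset (sklearn's breast cancer data), the hyperparameter grid, and the scoring choice are stand-ins, not the diabetes data or settings used in the study.

```python
from sklearn.datasets import load_breast_cancer   # stand-in dataset, not the diabetes data
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.metrics import classification_report

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Interaction terms (pairwise products of features) followed by a tuned random forest.
pipe = make_pipeline(
    PolynomialFeatures(degree=2, interaction_only=True, include_bias=False),
    RandomForestClassifier(random_state=0),
)
grid = GridSearchCV(
    pipe,
    {"randomforestclassifier__n_estimators": [100, 300],
     "randomforestclassifier__max_depth": [None, 10]},
    scoring="f1", cv=5,
)
grid.fit(X_tr, y_tr)
print(classification_report(y_te, grid.predict(X_te)))
```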
15 pages, 770 KiB  
Article
A Two-Step Machine Learning Method for Predicting the Formation Energy of Ternary Compounds
by Varadarajan Rengaraj, Sebastian Jost, Franz Bethke, Christian Plessl, Hossein Mirhosseini, Andrea Walther and Thomas D. Kühne
Computation 2023, 11(5), 95; https://doi.org/10.3390/computation11050095 - 09 May 2023
Viewed by 2235
Abstract
Predicting the chemical stability of yet-to-be-discovered materials is an important aspect of the discovery and development of virtual materials. The conventional approach for computing the enthalpy of formation based on ab initio methods is time consuming and computationally demanding. In this regard, alternative machine learning approaches are proposed to predict the formation energies of different classes of materials with decent accuracy. In this paper, one such machine learning approach, a novel two-step method that predicts the formation energy of ternary compounds, is presented. In the first step, with a classifier, we determine the accuracy of heuristically calculated formation energies in order to increase the size of the training dataset for the second step. The second step is a regression model that predicts the formation energy of the ternary compounds. The first step leads to at least a 100% increase in the size of the dataset with respect to the data available in the Materials Project database. The results from the regression model match those from the existing state-of-the-art prediction models. In addition, we propose a slightly modified version of the Adam optimizer, namely centered Adam, and report the results from testing the centered Adam optimizer. Full article
(This article belongs to the Section Computational Chemistry)
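A schematic version of the two-step workflow on synthetic data: a classifier decides which cheap heuristic formation energies are trustworthy enough to enlarge the training set, and a regressor is then fitted on the enlarged set. Descriptors, thresholds, and models are assumptions for illustration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, GradientBoostingRegressor

rng = np.random.default_rng(3)

def descriptors(n):          # synthetic composition/structure descriptors
    return rng.normal(size=(n, 10))

def dft_energy(X):           # "ground truth" formation energy (synthetic)
    return X[:, 0] - 0.5 * X[:, 1]

def heuristic_energy(X):     # cheap but noisy estimate of the same quantity
    return dft_energy(X) + rng.normal(0, 0.4, len(X))

# Labelled pool: compounds where both DFT and heuristic energies are available.
X_lab = descriptors(500)
accurate = (np.abs(heuristic_energy(X_lab) - dft_energy(X_lab)) < 0.2).astype(int)

# Step 1: classifier predicting whether the heuristic value is accurate enough.
clf = RandomForestClassifier(random_state=0).fit(X_lab, accurate)

# Unlabelled pool: only heuristic energies exist; keep those the classifier trusts.
X_new = descriptors(2000)
y_new = heuristic_energy(X_new)
keep = clf.predict(X_new).astype(bool)

# Step 2: regression on the enlarged dataset (DFT data plus accepted heuristic data).
X_train = np.vstack([X_lab, X_new[keep]])
y_train = np.concatenate([dft_energy(X_lab), y_new[keep]])
reg = GradientBoostingRegressor().fit(X_train, y_train)
print(f"training set grew from {len(X_lab)} to {len(X_train)} samples")
```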
21 pages, 1245 KiB  
Article
An Algebraic Approach to the Solutions of the Open Shop Scheduling Problem
by Agustín Moreno Cañadas, Odette M. Mendez, Juan-Carlos Riaño-Rojas and Juan-David Hormaza
Computation 2023, 11(5), 94; https://doi.org/10.3390/computation11050094 - 08 May 2023
Viewed by 1572
Abstract
The open shop scheduling problem (OSSP) is one of the standard scheduling problems. It consists of scheduling jobs associated with a finite set of tasks developed by different machines. In this case, each machine processes at most one operation at a time, and the job processing order on the machines does not matter. The goal is to determine the completion times of the operations processed on the machines so as to minimize the largest job completion time, called Cmax. This paper proves that each OSSP has an associated path algebra, called a Brauer configuration algebra, whose representation theory (particularly its dimension and the dimension of its center) can be given in terms of the corresponding Cmax value. It is also proved that the dimensions of the centers of Brauer configuration algebras associated with OSSPs with minimal Cmax are congruent modulo the number of machines. Full article
(This article belongs to the Section Computational Engineering)
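For readers new to the OSSP, the sketch below computes the standard Cmax lower bound (the busiest job or the busiest machine) and a greedy schedule for a tiny invented instance; the algebraic construction in the paper is not reproduced here.

```python
import numpy as np

# Processing times p[j][m] for a tiny open shop: 3 jobs x 3 machines (invented data).
p = np.array([[3, 2, 4],
              [2, 5, 1],
              [4, 1, 3]])
n_jobs, n_machines = p.shape

# Lower bound on Cmax: no schedule can beat the busiest job or the busiest machine.
lb = max(p.sum(axis=1).max(), p.sum(axis=0).max())

# Greedy list scheduling: repeatedly start the longest operation whose job and
# machine are both idle; advance time to the next feasible start when stuck.
job_free = np.zeros(n_jobs)
mach_free = np.zeros(n_machines)
remaining = {(j, m) for j in range(n_jobs) for m in range(n_machines)}
t = 0.0
while remaining:
    ready = [(j, m) for (j, m) in remaining if job_free[j] <= t and mach_free[m] <= t]
    if not ready:
        t = min(max(job_free[j], mach_free[m]) for (j, m) in remaining)
        continue
    j, m = max(ready, key=lambda op: p[op])
    job_free[j] = mach_free[m] = t + p[j, m]
    remaining.remove((j, m))

cmax = max(job_free.max(), mach_free.max())
print(f"lower bound on Cmax: {lb}, greedy schedule Cmax: {cmax}")
```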
14 pages, 1372 KiB  
Article
A Novel Finite Element Model for the Study of Harmful Vibrations on the Aging Spine
by Shivam Verma, Gurpreet Singh and Arnab Chanda
Computation 2023, 11(5), 93; https://doi.org/10.3390/computation11050093 - 05 May 2023
Cited by 1 | Viewed by 1101
Abstract
The human spine is susceptible to a wide variety of adverse consequences from vibrations, including lower back discomfort. These effects are often seen in the drivers of vehicles, earth-moving equipment, and trucks, and also in those who drive for long hours in general. The human spine is composed of vertebrae, discs, and tissues that work together to provide it with a wide range of movements and the significant load-carrying capability needed for daily physical exercise. However, there is a limited understanding of vibration characteristics in different age groups and the effect of vibration transmission in the spinal column, which may be harmful to its different sections. In this work, a novel finite element model (FEM) was developed to study the variation of vibration absorption capacity due to the aging effect of the different sections of the human spine. These variations were observed from the first three natural frequencies of the human spine structure, which were obtained by solving the eigenvalue problem of the novel finite element model for different ages. From the results, aging was observed to lead to an increase in the natural frequencies of all three spinal segments. As the age increased beyond 30 years, the natural frequency significantly increased for the thoracic segment, compared to the lumbar and cervical segments. A range of such novel findings indicated the harmful frequencies at which resonance may occur, causing spinal pain and possible injuries. This information would be indispensable for spinal surgeons for the prognosis of spinal column injury (SCI) patients affected by harmful vibrations from workplaces, as well as manufacturers of automotive and aerospace equipment for designing effective dampers for better whole-body vibration mitigation. Full article
(This article belongs to the Special Issue Application of Finite Element Methods)
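The natural frequencies mentioned above come from a structural eigenvalue problem; the sketch below solves K*phi = omega^2 * M*phi for a three-mass chain standing in for the cervical, thoracic, and lumbar segments. Masses and stiffnesses are invented placeholders, not the paper's FEM values.

```python
import numpy as np
from scipy.linalg import eigh

# Three-mass chain on a fixed base standing in for cervical / thoracic / lumbar segments.
# Masses [kg] and stiffnesses [N/m] are illustrative placeholders only.
masses = np.array([6.0, 20.0, 14.0])
k = np.array([3.0e5, 5.0e5, 4.0e5])      # base spring and the two coupling springs

M = np.diag(masses)
K = np.array([[k[0] + k[1], -k[1],        0.0 ],
              [-k[1],        k[1] + k[2], -k[2]],
              [0.0,         -k[2],         k[2]]])

# Generalised eigenvalue problem K*phi = omega^2 * M*phi; frequencies f = omega / (2*pi).
omega_sq, _ = eigh(K, M)
freqs_hz = np.sqrt(omega_sq) / (2.0 * np.pi)
print("first three natural frequencies [Hz]:", freqs_hz.round(2))
```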
14 pages, 949 KiB  
Article
The Cost of Understanding—XAI Algorithms towards Sustainable ML in the View of Computational Cost
by Claire Jean-Quartier, Katharina Bein, Lukas Hejny, Edith Hofer, Andreas Holzinger and Fleur Jeanquartier
Computation 2023, 11(5), 92; https://doi.org/10.3390/computation11050092 - 04 May 2023
Cited by 1 | Viewed by 2360
Abstract
In response to socioeconomic development, the number of machine learning applications has increased, along with the calls for algorithmic transparency and further sustainability in terms of energy-efficient technologies. Modern computer algorithms that process large amounts of information, particularly artificial intelligence methods and their workhorse machine learning, can be used to promote and support sustainability; however, they consume a lot of energy themselves. This work focuses on and interconnects two key aspects of artificial intelligence regarding the transparency and sustainability of model development. We identify frameworks for measuring carbon emissions from Python algorithms and evaluate energy consumption during model development. Additionally, we test the impact of explainability on algorithmic energy consumption during model optimization, particularly for applications in health and, to expand the scope and achieve widespread use, civil engineering and computer vision. Specifically, we present three different models of classification, regression, and object-based detection for the scenarios of cancer classification, building energy, and image detection, each integrated with explainable artificial intelligence (XAI) or feature reduction. This work can serve as a guide for selecting a tool to measure and scrutinize algorithmic energy consumption and raise awareness of emission-based model optimization by highlighting the sustainability of XAI. Full article
(This article belongs to the Special Issue Intelligent Computing, Modeling and its Applications)
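One way to attribute emissions to a block of Python code is an emissions tracker such as CodeCarbon; whether this specific tool is among the frameworks evaluated in the paper is not stated in the abstract, so treat the snippet below as a generic example of the measurement pattern.

```python
from codecarbon import EmissionsTracker
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)

# Wrap model training in an emissions tracker to attribute CO2-equivalent emissions
# (and, indirectly, energy use) to this specific block of Python code.
tracker = EmissionsTracker(project_name="rf_training_demo")
tracker.start()
RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
emissions_kg = tracker.stop()
print(f"estimated emissions for training: {emissions_kg:.6f} kg CO2eq")
```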
39 pages, 6696 KiB  
Article
Marine Predators Algorithm for Sizing Optimization of Truss Structures with Continuous Variables
by Rafiq Bodalal and Farag Shuaeib
Computation 2023, 11(5), 91; https://doi.org/10.3390/computation11050091 - 30 Apr 2023
Cited by 3 | Viewed by 1749
Abstract
In this study, the newly developed Marine Predators Algorithm (MPA) is formulated to minimize the weight of truss structures. MPA is a swarm-based metaheuristic algorithm inspired by the efficient foraging strategies of marine predators in oceanic environments. In order to assess the robustness of the proposed method, three normal-sized structural benchmarks (10-bar, 60-bar, and 120-bar spatial dome) and three large-scale structures (272-bar, 942-bar, and 4666-bar truss tower) were selected from the literature. Results point to the inherent strength of MPA against all state-of-the-art metaheuristic optimizers implemented so far. Moreover, for the first time in the field, a quantitative evaluation of the proper convergence behavior (exploration vs. exploitation balance) in the context of structural optimization is conducted, addressing this age-old question. To this end, a novel dimension-wise diversity index is adopted as a methodology to investigate each of the two schemes. It was concluded that the balance that produced the best results was about 90% exploitation and 10% exploration (on average for the entire computational process). Full article
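The exploration/exploitation percentages quoted above are typically derived from a dimension-wise diversity index of the population; the sketch below applies one common formulation to a fake optimisation history (the decaying random populations stand in for positions that would be logged during an MPA run).

```python
import numpy as np

def dimension_wise_diversity(population):
    """Mean absolute deviation of each dimension from its median, averaged over dimensions."""
    med = np.median(population, axis=0)
    return np.mean(np.abs(population - med))

rng = np.random.default_rng(4)
# Fake optimisation history: population positions (30 agents x 10 design variables)
# recorded at every iteration; a real run would log these inside the MPA loop.
history = [rng.uniform(-100, 100, size=(30, 10)) * np.exp(-0.05 * it)
           for it in range(200)]

div = np.array([dimension_wise_diversity(pop) for pop in history])
div_max = div.max()
exploration = 100.0 * div / div_max                # high diversity -> exploring
exploitation = 100.0 * (div_max - div) / div_max   # low diversity  -> exploiting

print(f"average exploration:  {exploration.mean():.1f}%")
print(f"average exploitation: {exploitation.mean():.1f}%")
```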
34 pages, 6703 KiB  
Article
Informal Sector, ICT Dynamics, and the Sovereign Cost of Debt: A Machine Learning Approach
by Apostolos Kotzinos, Vasilios Canellidis and Dimitrios Psychoyios
Computation 2023, 11(5), 90; https://doi.org/10.3390/computation11050090 - 28 Apr 2023
Viewed by 1648
Abstract
We examine the main effects of ICT penetration and the shadow economy on sovereign credit ratings and the cost of debt, along with possible second-order effects between the two variables, on a dataset of 65 countries from 2001 to 2016. The paper presents a range of machine-learning approaches, including bagging, random forests, gradient-boosting machines, and recurrent neural networks. Furthermore, following recent trends in the emerging field of interpretable ML, based on model-agnostic methods such as feature importance and accumulated local effects, we attempt to explain which factors drive the predictions of the so-called ML black box models. We show that policies facilitating the penetration and use of ICT and aiming to curb the shadow economy may exert an asymmetric impact on sovereign ratings and the cost of debt depending on their present magnitudes, not only independently but also in interaction. Full article
(This article belongs to the Special Issue Quantitative Finance and Risk Management Research)
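A small example of the model-agnostic interpretation step described above: a gradient-boosting model is fitted on synthetic country-year data and permutation importance ranks the drivers of the predicted cost of debt. Feature names, data, and the importance method are illustrative assumptions (the paper also uses accumulated local effects).

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n = 1000
# Synthetic country-year panel: ICT penetration, shadow-economy size, and controls.
X = pd.DataFrame({
    "ict_penetration": rng.uniform(0, 100, n),
    "shadow_economy": rng.uniform(5, 60, n),
    "gdp_growth": rng.normal(2, 2, n),
    "debt_to_gdp": rng.uniform(20, 150, n),
})
# Assumed data-generating process with an ICT x shadow-economy interaction.
y = (0.03 * X["debt_to_gdp"] + 0.05 * X["shadow_economy"]
     - 0.02 * X["ict_penetration"]
     + 0.0005 * X["shadow_economy"] * X["ict_penetration"]
     + rng.normal(0, 0.5, n))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)

# Permutation importance: how much test error grows when each feature is shuffled.
imp = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=0)
for name, score in sorted(zip(X.columns, imp.importances_mean), key=lambda p: -p[1]):
    print(f"{name:18s} {score:.3f}")
```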
16 pages, 3089 KiB  
Article
Preemptive Priority Markovian Queue Subject to Server Breakdown with Imperfect Coverage and Working Vacation Interruption
by Tzu-Hsin Liu, He-Yao Hsu, Jau-Chuan Ke and Fu-Min Chang
Computation 2023, 11(5), 89; https://doi.org/10.3390/computation11050089 - 27 Apr 2023
Cited by 3 | Viewed by 1151
Abstract
This work considers a preemptive priority queueing system with vacation, where the single server may break down with imperfect coverage. Various combinations of server vacation priority queueing models have been studied by many scholars. A common assumption in these models is that the server will only resume its normal service rate after the vacation is over. However, such an assumption is rather restrictive in real-world situations. Hence, in this study, the vacation is interrupted if a customer is waiting for service in the system at the moment a service is completed during the vacation. The stationary probability distribution is derived by using the probability generating function approach. We also develop a variety of performance measures and provide a simple numerical example to illustrate these measures. Optimization analysis is finally carried out, including cost optimization and tri-objective optimization. Full article
10 pages, 2005 KiB  
Communication
Understanding Antioxidant Abilities of Dihydroxybenzenes: Local and Global Electron Transfer Properties
by Priyanka Chauhan, Gururaj Kudur Jayaprakash, Isha Soni, Mamta Sharma, Juan Pablo Mojica-Sànchez, Shashanka Rajendrachari and Praveen Naik
Computation 2023, 11(5), 88; https://doi.org/10.3390/computation11050088 - 26 Apr 2023
Cited by 1 | Viewed by 1727
Abstract
In the current work, the global (based on Koopmans’ approximation) and local electron transfer characteristics of dihydroxybenzenes have been examined using density functional theory in order to understand their antioxidant activity. Our experimental and theoretical studies show that hydroquinone has better antioxidant activity than resorcinol and catechol. To identify the antioxidant sites of each dihydroxybenzene molecule, an average analytical Fukui analysis was used. The average Fukui analysis results demonstrate that the dihydroxybenzene oxygen atoms serve as antioxidant sites. The experimental and theoretical results are in good agreement with each other; therefore, our results are reliable. Full article
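For context, the Koopmans-type global descriptors and condensed Fukui functions that such an analysis typically rests on are summarised below (standard textbook definitions, not equations quoted from the paper); here q_k(N) denotes the partial charge on atom k of the N-electron system.

```latex
IP \approx -E_{\mathrm{HOMO}}, \qquad EA \approx -E_{\mathrm{LUMO}}, \qquad
\mu = -\tfrac{1}{2}(IP + EA), \qquad \eta = \tfrac{1}{2}(IP - EA), \qquad
\omega = \frac{\mu^{2}}{2\eta}

f_k^{+} = q_k(N) - q_k(N+1), \qquad
f_k^{-} = q_k(N-1) - q_k(N), \qquad
f_k^{0} = \tfrac{1}{2}\bigl(q_k(N-1) - q_k(N+1)\bigr)
```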
15 pages, 9958 KiB  
Article
Topological Optimization of Interconnection of Multilayer Composite Structures
by P. V. Dunchenkin, V. A. Cherekaeva, T. V. Yakovleva and A. V. Krysko
Computation 2023, 11(5), 87; https://doi.org/10.3390/computation11050087 - 25 Apr 2023
Viewed by 1130
Abstract
This study focuses on the topological optimization of adhesive overlap joints for structures subjected to longitudinal mechanical loads. The aim is to reduce peak stresses at the joint interface of the elements. Peak stresses in such joints can lead to failure of both the joint and the structure itself. A new approach based on Rational Approximation of Material Properties (RAMP) and the Finite Element Method (FEM) has been proposed to minimize peak stresses in multi-layer composite joints. Using this approach, the peak von Mises stresses of the optimal structural joint have been reduced significantly, by up to 50%, under mechanical loading in the longitudinal direction. The paper includes numerical examples of different types of structural element connections. Full article
(This article belongs to the Special Issue Application of Finite Element Methods)
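For reference, the RAMP interpolation that this approach builds on penalises intermediate densities by scaling the material stiffness as follows (standard textbook form; the paper's exact parameter choices are not given in the abstract), where rho is the design density, q the penalisation parameter, and E_0 the solid-material modulus:

```latex
E(\rho) = E_{\min} + \frac{\rho}{1 + q\,(1-\rho)}\,\bigl(E_{0} - E_{\min}\bigr),
\qquad 0 \le \rho \le 1, \quad q \ge 0
```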
15 pages, 8393 KiB  
Article
Proposal for the Clustering of Characteristics to Identify Emotions in the Development of a Foreign Language Exam
by Carlos Montenegro, Víctor Medina and Helbert Espitia
Computation 2023, 11(5), 86; https://doi.org/10.3390/computation11050086 - 24 Apr 2023
Viewed by 1061
Abstract
Automatic emotion identification allows for obtaining information on emotions experienced by an individual during certain activities, which is essential for improving their performance or preparing for similar experiences. This document aims to establish the clusters of variables associated with the identification of emotions when a group of students takes a foreign language exam in Portuguese. Once the data clusters are determined, it is possible to establish the perception of emotions in the students with relevant variables and their respective decision thresholds. This study can later be used to build a model that relates the measured variables and the student’s performance so that strategies can be generated to help the student achieve better results on the test. The results indicate that the clusters and range values of the variables can be obtained to observe changes in the concentration of the students. This preliminary information can be used to design a fuzzy inference system to identify the student’s state of concentration. Full article
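As a schematic of the clustering step described above, the sketch groups synthetic physiological features with k-means and reports the cluster centres, whose boundaries can serve as rough decision thresholds; the feature names, the number of clusters, and the data are assumptions, not the variables measured in the study.

```python
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(6)
# Synthetic stand-ins for signals recorded while students take the exam.
X = pd.DataFrame({
    "heart_rate": rng.normal(80, 12, 300),
    "skin_conductance": rng.gamma(2.0, 1.5, 300),
    "blink_rate": rng.normal(17, 5, 300),
})

# Cluster the standardised features; each cluster is read as a concentration state.
scaler = StandardScaler().fit(X)
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(scaler.transform(X))

centres = pd.DataFrame(scaler.inverse_transform(km.cluster_centers_), columns=X.columns)
print(centres.round(1))   # centre values suggest per-variable decision thresholds
```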
23 pages, 6080 KiB  
Article
Modeling of Nonlinear Dynamic Processes of Human Movement in Virtual Reality Based on Digital Shadows
by Artem Obukhov, Denis Dedov, Andrey Volkov and Daniil Teselkin
Computation 2023, 11(5), 85; https://doi.org/10.3390/computation11050085 - 23 Apr 2023
Cited by 3 | Viewed by 1576
Abstract
In virtual reality (VR) systems, a problem is the accurate reproduction of the user’s body in a virtual environment using inverse kinematics, because existing motion capture systems have a number of drawbacks, and minimizing the number of key tracking points (KTPs) leads to a large error. To solve this problem, it is proposed to use the concept of a digital shadow and machine learning technologies to optimize the number of KTPs. A technique for collecting movement process data from a virtual avatar is implemented, modeling of nonlinear dynamic processes of human movement based on a digital shadow is carried out, the problem of optimizing the number of KTPs is formulated, and an overview of the applied machine learning algorithms and metrics for their evaluation is given. An experiment on a dataset formed from virtual avatar movements shows the following results: three KTPs do not provide sufficient reconstruction accuracy, and the choice of five or seven KTPs is optimal; among the algorithms, the most efficient in descending order are AdaBoostRegressor, LinearRegression, and SGDRegressor. During the reconstruction using AdaBoostRegressor, the maximum deviation is not more than 0.25 m, and the average is not more than 0.10 m. Full article
(This article belongs to the Special Issue Mathematical Modeling and Study of Nonlinear Dynamic Processes)
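A compact illustration of reconstructing many avatar joint coordinates from a handful of key tracking points with AdaBoostRegressor (the best-performing algorithm reported above); the motion data here are synthetic, and the five-KTP choice simply mirrors the abstract's finding rather than reproducing the paper's pipeline.

```python
import numpy as np
from sklearn.ensemble import AdaBoostRegressor
from sklearn.multioutput import MultiOutputRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
# Synthetic motion frames: 5 key tracking points (x, y, z each) -> 20 body joints (x, y, z).
n_frames = 1000
ktp = rng.normal(size=(n_frames, 5 * 3))
mixing = rng.normal(scale=0.3, size=(5 * 3, 20 * 3))
joints = ktp @ mixing + rng.normal(scale=0.02, size=(n_frames, 20 * 3))

X_tr, X_te, y_tr, y_te = train_test_split(ktp, joints, random_state=0)

# One AdaBoost regressor per output coordinate, wrapped for multi-output prediction.
model = MultiOutputRegressor(AdaBoostRegressor(random_state=0)).fit(X_tr, y_tr)
err = np.abs(model.predict(X_te) - y_te)
print(f"max deviation: {err.max():.3f}, mean deviation: {err.mean():.3f} (arbitrary units)")
```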