Mathematical and Computational Applications doi: 10.3390/mca28030072

Authors: José-Luis Llaguno-Roque Rocio-Erandi Barrientos-Martínez Héctor-Gabriel Acosta-Mesa Tania Romo-González Efrén Mezura-Montes

Breast cancer has become a global health problem, ranking first in incidence and fifth in mortality among women worldwide. In Mexico, breast cancer is the leading cause of death in women. This work uses deep learning techniques to discriminate between healthy individuals and breast cancer patients, based on the banding patterns obtained from Western Blot strip images of the autoantibody response to antigens of the T47D tumor line. The reaction of antibodies to tumor antigens occurs early in tumorigenesis, years before clinical symptoms appear. One of the main challenges in deep learning is the design of the convolutional neural network architecture. Neuroevolution has been used to support this task and has produced highly competitive results. We propose neuroevolving convolutional neural networks (CNNs) to find an architecture that achieves competitive classification performance, taking Western Blot images as input. The resulting CNN reached 90.67% accuracy, 90.71% recall, 95.34% specificity, and 90.69% precision in classifying three classes (healthy, benign breast pathology, and breast cancer).

Mathematical and Computational Applications doi: 10.3390/mca28030071

Authors: Marcela Quiroz-Castellanos Luis Gerardo de la Fraga Adriana Lara Leonardo Trujillo Oliver Schütze

This Special Issue was inspired by the 9th International Workshop on Numerical and Evolutionary Optimization (NEO 2021), held as an online-only event from 8 to 10 September 2021 due to the COVID-19 pandemic [...]

Mathematical and Computational Applications doi: 10.3390/mca28030070

Authors: Mark Pollicott Julia Slipantschuk

We establish rigorous estimates for the Hausdorff dimension of the spectra of Laplacians associated with Sierpiński lattices, infinite Sierpiński gaskets, and other post-critically finite self-similar sets.

Mathematical and Computational Applications doi: 10.3390/mca28030069

Authors: Sebastian Stark

Robust and computationally efficient numeric algorithms are required to simulate the sintering process of complex ceramic components by means of the finite element method. This work focuses on a thermodynamically consistent sintering model capturing the effects of both viscosity and elasticity within the standard dissipative framework. In particular, the temporal integration of the model by means of several implicit first- and second-order accurate one-step time integration methods is discussed. Numerical experiments at the material point level show that the first-order schemes perform poorly compared to the second-order schemes. Further numerical experiments indicate that these results carry over directly to finite element simulations.

Mathematical and Computational Applications doi: 10.3390/mca28030068

Authors: Martin Philip Venter Naudé Thomas Conradie

This paper introduces a comparison method for three explicitly defined intermediate encoding methods in generative design for two-dimensional soft robotic units. This study evaluates a conventional genetic algorithm with full access to removing elements from the design domain using an implicit random encoding layer, a Lindenmayer system encoding mimicking biological growth patterns, and a compositional pattern-producing network encoding for 2D pattern generation. The objective of the optimisation problem is to match the deformation of a single actuator unit with a desired target shape, specifically uni-axial elongation, under internal pressure. The results suggest that the Lindenmayer system encoding generates candidate units with fewer function evaluations than the traditional implicitly encoded genetic algorithm. However, its distribution of constraint and internal energy is similar to that of the random encoding, and it produces a less diverse population of candidate units. In contrast, despite requiring more function evaluations than the Lindenmayer system encoding, the compositional pattern-producing network encoding produces a similar diversity of candidate units. Overall, the compositional pattern-producing network encoding yields a proportionally higher number of high-performing units than the random or Lindenmayer system encoding, making it a viable alternative to a conventional monolithic approach. The results suggest that the compositional pattern-producing network encoding may be a promising approach for designing soft robotic actuators with desirable performance characteristics.
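As a concrete illustration of the rewriting step at the heart of a Lindenmayer system encoding, the following minimal sketch expands an axiom string by repeatedly applying production rules. The rule set shown is hypothetical, chosen only to illustrate the mechanism, and is not the encoding used in the paper.

```python
def expand(axiom, rules, iterations):
    """Iteratively rewrite an L-system string according to production rules.

    Characters without a rule (e.g. turn and branch symbols) are copied as-is.
    """
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Hypothetical interpretation: 'F' = place material, '+'/'-' = turn, '[' ']' = branch.
rules = {"F": "F[+F]F[-F]"}
print(expand("F", rules, 2))
```

A generative design loop would then interpret the expanded string as geometry (e.g. via turtle graphics over the design domain) and let the genetic algorithm evolve the rule strings rather than the geometry directly.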

Mathematical and Computational Applications doi: 10.3390/mca28030067

Authors: Luis Víctor Maidana Benítez Melisa María Rosa Villamayor Paredes José Colbes César F. Bogado-Martínez Benjamin Barán Diego P. Pinto-Roa

This paper addresses serialized approaches to the routing, modulation level, and spectrum assignment (RMLSA) problem in elastic optical networks, using multiple sequential sub-sets of requests, under Integer Linear Programming (ILP). The literature has reported two-stage serial optimization methods, referred to as RML+SA, which retain computational efficiency as the problem grows, compared to the classical one-stage RMLSA optimization approach. However, their spectrum usage still leaves considerable room for improvement when compared to the RMLSA solution. Consequently, this paper proposes RML+SA solutions considering multiple sequential sub-sets of requests, split traffic flow, as well as path-oriented and link-oriented routing models. Simulation results on different test scenarios show that: (a) the models based on multiple sequential sub-sets of requests improve computation time without worsening spectrum usage when compared to single-set optimization approaches, (b) divisible traffic flow approaches show promise in cases where the number of request sub-sets is low compared to the non-divisible counterpart, and (c) path-oriented routing succeeds in improving the used spectrum by increasing the number of candidate routes compared to link-oriented routing.

Mathematical and Computational Applications doi: 10.3390/mca28030066

Authors: Adam Aharony Ron Hindi Maor Valdman Shai Gul

Images or paintings with homogeneous colors may appear dull to the naked eye; however, there may be numerous details in the image that are expressed through subtle changes in color. This manuscript introduces a novel approach that can uncover these concealed details via a transformation that increases the distance between adjacent pixels, ultimately leading to a newly modified version of the input image. We chose the artworks of Mark Rothko, famous for their simplicity and limited color palette, as a case study. Our approach offers a different perspective, leading to the discovery of either accidental or deliberate clusters of colors. Our method is based on the quaternion ring, wherein a suitable multiplication can be used to boost the color difference between neighboring pixels, thereby unveiling new details in the image. The quality of the transformation between the original image and the resultant versions can be measured by the ratio between the number of connected components in the output versions (n) and the number of connected components in the original image (m), which usually satisfies n/m ≫ 1. Although this procedure has been employed as a case study for artworks, it can be applied to any type of image with a similar simplicity and limited color palette.
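The paper's exact quaternion multiplication is not reproduced here, but the basic mechanism can be sketched: an RGB triple is embedded as a pure quaternion and transformed by quaternion conjugation, here a rotation about the gray axis (1,1,1), which shifts hues while preserving the vector norm. The axis and angle below are illustrative choices, not the authors' parameters.

```python
import numpy as np

def quat_mul(q, r):
    """Hamilton product of two quaternions (w, x, y, z)."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def rotate_color(rgb, axis, angle):
    """Rotate an RGB triple, embedded as a pure quaternion, about a given axis."""
    axis = axis / np.linalg.norm(axis)
    q = np.concatenate(([np.cos(angle / 2)], np.sin(angle / 2) * axis))
    q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
    p = np.concatenate(([0.0], rgb))           # embed color as (0, r, g, b)
    return quat_mul(quat_mul(q, p), q_conj)[1:]  # q p q* is again pure
```

Applying such a rotation with an angle proportional to the local color difference would amplify the contrast between neighboring pixels without pushing channel magnitudes out of range.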

Mathematical and Computational Applications doi: 10.3390/mca28030065

Authors: Boris Solomyak

This is a brief survey of selected results obtained using the "transversality method" developed for studying parametrized families of fractal sets and measures. We mostly focus on the early development of the theory, restricting ourselves to self-similar and self-conformal iterated function systems.

Mathematical and Computational Applications doi: 10.3390/mca28030064

Authors: Khanyisani Mhlangano Makhanya Simon Connell Muaaz Bhamjee Neil Martinson

Pulmonary diseases are a leading cause of illness and disability globally. While access to hospitals or specialist clinics for investigations is currently the usual way to characterize a patient's condition, access to medical services is restricted in less resourced settings. We posit that pulmonary disease may affect vocalization, which could aid in characterizing a pulmonary condition. We therefore propose a new method to diagnose pulmonary disease by analyzing changes in a patient's voice and cough. Computational fluid dynamics holds immense potential for assessing flow-induced acoustics in the lungs. The aim of this study is to investigate the potential of flow-induced vocal-, cough-, and lung-generated acoustics to diagnose lung conditions using computational fluid dynamics methods; pneumonia is the model disease studied. The hypothesis is that a computational fluid dynamics model will accurately represent the flow-induced acoustics of healthy and infected lungs, and that modeled differences in fluid and acoustic behavior between these pathologies can be tested and described. Computational fluid dynamics and a lung geometry are used to simulate the flow distribution and obtain the acoustics for the different scenarios. The results suggest that it is possible to determine the difference in vocalization between healthy lungs and those with pneumonia using computational fluid dynamics, as the flow patterns and acoustics differ. Our results suggest there is potential for computational fluid dynamics to enhance understanding of flow-induced acoustics that could be characteristic of different lung pathologies. Such simulations could be repeated using machine learning, with the final objective of using telemedicine to triage or diagnose patients with respiratory illness remotely.

Mathematical and Computational Applications doi: 10.3390/mca28030063

Authors: Marc Girondot Jon Barry

The distribution of the sum of negative binomial random variables has a special role in insurance mathematics, actuarial sciences, and ecology. Two methods to estimate this distribution have been published: a finite-sum exact expression and a series expression by convolution. We compare both methods, as well as a new normalized saddlepoint approximation, and normal and single-distribution negative binomial approximations. We show that the finite-sum exact expression uses large amounts of memory when the number of random variables is high (>7). The normalized saddlepoint approximation gives an output with a high relative error (around 3–5%), which can be a problem in some situations. The convolution method is a good compromise for applied practitioners, considering the amount of memory used, the computing time, and the precision of the estimates. However, a simplistic implementation of the algorithm could produce incorrect results due to the non-monotony of the convergence rate. The tolerance limit must be chosen depending on the expected order of magnitude of the estimate, for which we used the answer generated by the saddlepoint approximation. Finally, the normal and negative binomial approximations should not be used, as they produce outputs with very low accuracy.
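The convolution idea is straightforward to sketch: the probability mass function (PMF) of a sum of independent variables is the convolution of their PMFs. The minimal pure-Python version below truncates each PMF at a fixed upper bound and omits the tolerance-based stopping logic the authors discuss; it is an illustration, not their implementation.

```python
from math import comb

def nbinom_pmf(k, r, p):
    """P(X = k) for X ~ NegBin(r, p): number of failures before the r-th success."""
    return comb(k + r - 1, k) * p**r * (1 - p)**k

def sum_nbinom_pmf(params, kmax):
    """PMF of the sum of independent NB(r, p) variables on 0..kmax, by convolution.

    params is a list of (r, p) pairs; truncation at kmax only discards upper-tail
    mass, so the returned values on 0..kmax are exact up to floating point.
    """
    pmf = [1.0] + [0.0] * kmax          # PMF of the empty sum (point mass at 0)
    for r, p in params:
        comp = [nbinom_pmf(k, r, p) for k in range(kmax + 1)]
        pmf = [sum(pmf[j] * comp[k - j] for j in range(k + 1))
               for k in range(kmax + 1)]
    return pmf
```

A convenient sanity check: the sum of NB(r1, p) and NB(r2, p) with a common p is again negative binomial, NB(r1 + r2, p).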

Mathematical and Computational Applications doi: 10.3390/mca28020062

Authors: Jacques Francois Du Toit Ryno Laubscher

Physics-Informed Neural Networks (PINNs) are a new class of machine learning algorithms that are capable of accurately solving complex partial differential equations (PDEs) without training data. By introducing a new methodology for fluid simulation, PINNs provide the opportunity to address challenges that were previously intractable, such as PDE problems that are ill-posed. PINNs can also solve parameterized problems in a parallel manner, which results in favorable scaling of the associated computational cost. The full potential of applying PINNs to fluid dynamics problems is still unknown, as the method is still in early development: many issues remain to be addressed, such as the numerical stiffness of the training dynamics, the shortage of methods for simulating turbulent flows, and the uncertainty surrounding which model hyperparameters perform best. In this paper, we investigated the accuracy and efficiency of PINNs for modeling aortic transvalvular blood flow in the laminar and turbulent regimes, using various techniques from the literature to improve the simulation accuracy of PINNs. Almost no work has been published, to date, on solving turbulent flows using PINNs without training data, as this regime has proved difficult; this paper aims to address that gap by providing an illustrative example of such an application. The simulation results are discussed and compared to results from the Finite Volume Method (FVM). It is shown that PINNs can closely match the FVM solution for laminar flow, with normalized maximum velocity and normalized maximum pressure errors as low as 5.74% and 9.29%, respectively. The simulation of turbulent flow is shown to be a greater challenge, with normalized maximum velocity and normalized maximum pressure errors only as low as 41.8% and 113%, respectively.

Mathematical and Computational Applications doi: 10.3390/mca28020061

Authors: Fernando Camarena Miguel Gonzalez-Mendoza Leonardo Chang Ricardo Cuevas-Ascencio

Artificial intelligence's rapid advancement has enabled various applications, including intelligent video surveillance systems, assisted living, and human-computer interaction. These applications often share one core task: video-based human action recognition. Research in video-based human action recognition is vast and ongoing, making it difficult to assess the full scope of available methods and current trends. This survey concisely explores the vision-based human action recognition field and defines core concepts, including the common challenges and the most used datasets. Additionally, we present the literature approaches and their evolution over time in an easy-to-understand manner, emphasizing intuitive notions. Finally, we explore current research directions and potential future paths. The core goal of this work is to provide future works with a shared understanding of fundamental ideas, clear intuitions about current approaches, and new research opportunities.

Mathematical and Computational Applications doi: 10.3390/mca28020060

Authors: Quinn G. Reynolds Thokozile P. Kekana Buhle S. Xakalashe

The application of direct-current plasma arc furnace technology to the problem of coal gasification is investigated using computational multiphysics models of the plasma arc inside such units. An integrated modelling workflow for the study of DC plasma arc discharges in synthesis gas atmospheres is presented. The thermodynamic and transport properties of the plasma are estimated using statistical mechanics calculations and are shown to have highly non-linear dependencies on the gas composition and temperature. A computational magnetohydrodynamic solver for electromagnetically coupled flows is developed and implemented in the OpenFOAM® framework, and the behaviour of three-dimensional transient simulations of arc formation and dynamics is studied in response to different plasma gas compositions and furnace operating conditions. To demonstrate the utility of the methods presented, practical engineering results are obtained from an ensemble of simulation results for a pilot-scale furnace design. These include the stability of the arc under different operating conditions and the dependence of voltage–current relationships on the arc length, which are relevant in understanding the industrial operability of plasma arc furnaces used for waste coal gasification.

Mathematical and Computational Applications doi: 10.3390/mca28020059

Authors: Daniele Boffi Fabio Credali Lucia Gastaldi Simone Scacchi

We present and analyze a parallel solver for the solution of fluid structure interaction problems described by a fictitious domain approach. In particular, the fluid is modeled by the non-stationary incompressible Navier–Stokes equations, while the solid evolution is represented by the elasticity equations. The parallel implementation is based on the PETSc library and the solver has been tested in terms of robustness with respect to mesh refinement and weak scalability by running simulations on a Linux cluster.

Mathematical and Computational Applications doi: 10.3390/mca28020058

Authors: Dineo A. Ramatlo Daniel N. Wilke Philip W. Loveday

Guided wave ultrasound (GWU) systems have been widely used for monitoring structures such as rails, pipelines, and plates. In railway tracks, the monitoring process involves the complicated propagation of waves over several hundred meters. The propagating waves are multi-modal and interact with discontinuities differently, increasing complexity and leading to different response signals. When the researcher wants to gain insight into the behavior of guided waves, predicting response signals for different combinations of modes becomes necessary. However, the task can become computationally costly when physics-based models are used. Digital twins can enable a practitioner to deal systematically with the complexities of guided wave monitoring in practical or user-specified settings. This paper investigates the use of a hybrid digital model of an operational rail track to predict response signals for varying user-specified settings, specifically the prediction of response signals for various combinations of modes of propagation in the rail. The digital twin hybrid model employs a physics-based model and a data-driven model. The physics-based model simulates the wave propagation response using techniques developed from the traditional 3D finite element method (FEM) and the 2D semi-analytical FEM. It is used to generate virtual experimental signals containing different combinations of modes of propagation. These response signals are used to train the data-driven model, which is based on a variational auto-encoder (VAE). Given an input baseline signal containing only the most dominant mode excited by a transducer, the VAE is trained to predict an inspection signal with increased complexity according to the specified combination of modes.
The results show that, once the VAE has been trained, it can be used to predict inspection signals for different combinations of propagating modes, thus replacing the physics-based model, which is computationally costly. In the future, the VAE architecture will be adapted to predict response signals for varying environmental and operational conditions.

Mathematical and Computational Applications doi: 10.3390/mca28020057

Authors: Johann M. Bouwer Daniel N. Wilke Schalk Kok

This research compares the performance of space-time surrogate models (STSMs) and network surrogate models (NSMs). Specifically, when the system response varies over time (or pseudo-time), the surrogates must predict the system response. A surrogate model is used to approximate the response of computationally expensive spatial and temporal fields resulting from computational mechanics simulations. Within a design context, a surrogate takes a vector of design variables that describe a current design and returns an approximation of the design's response through a pseudo-time variable. To compare various radial basis function (RBF) surrogate modeling approaches, the prediction of the load displacement path of a snap-through structure is used as an example numerical problem. This work specifically considers the scenario where analytical sensitivities are available directly from the computational mechanics solver, and therefore gradient-enhanced surrogates are constructed. In addition, the gradients are used to perform a domain transformation preprocessing step to construct surrogate models in a more isotropic domain, which is conducive to RBFs. This work demonstrates that although the gradient-based domain transformation scheme offers a significant improvement to the performance of the STSMs, the NSM is far more robust. This research offers explanations for the improved performance of NSMs over STSMs and recommends future research to improve the performance of STSMs.
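A plain (non-gradient-enhanced) RBF interpolant, the common building block behind both surrogate types, can be sketched as below. This is a minimal illustration under a Gaussian kernel, not the authors' implementation; the shape parameter `eps` is an assumed free parameter.

```python
import numpy as np

def rbf_fit(X, y, eps=1.0):
    """Solve for weights so the Gaussian RBF interpolant passes through (X, y)."""
    r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    Phi = np.exp(-(eps * r) ** 2)          # kernel (Gram) matrix
    return np.linalg.solve(Phi, y)

def rbf_eval(X, w, Xq, eps=1.0):
    """Evaluate the fitted interpolant at query points Xq."""
    r = np.linalg.norm(Xq[:, None, :] - X[None, :, :], axis=-1)
    return np.exp(-(eps * r) ** 2) @ w
```

In a space-time surrogate, the pseudo-time variable is simply appended to the design-variable vector as an extra input dimension, which is also where a domain transformation to a more isotropic input space pays off.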

Mathematical and Computational Applications doi: 10.3390/mca28020056

Authors: Henk Pijls Le Phuong Quan

In this paper, we propose two Maple procedures and some related utilities to determine the maximum curvature of a cubic Bézier-spline curve that interpolates an ordered set of points in R^2 or R^3. The procedures are designed from closed-form formulas for such open and closed curves.
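The paper derives closed-form formulas in Maple; the quantity itself, the maximum over the curve of the planar curvature κ(t) = |B'(t) × B''(t)| / |B'(t)|³, can also be approximated numerically for a single cubic Bézier segment, as in this sketch (sampling resolution `n` is an assumed parameter):

```python
import numpy as np

def bezier_max_curvature(P, n=2001):
    """Approximate the maximum curvature of a cubic Bezier curve with 2D control points P."""
    P = np.asarray(P, float)
    t = np.linspace(0.0, 1.0, n)[:, None]
    # First and second derivatives of a cubic Bezier in Bernstein form.
    d1 = 3 * ((1 - t) ** 2 * (P[1] - P[0])
              + 2 * (1 - t) * t * (P[2] - P[1])
              + t ** 2 * (P[3] - P[2]))
    d2 = 6 * ((1 - t) * (P[2] - 2 * P[1] + P[0]) + t * (P[3] - 2 * P[2] + P[1]))
    cross = d1[:, 0] * d2[:, 1] - d1[:, 1] * d2[:, 0]   # 2D cross product
    kappa = np.abs(cross) / np.linalg.norm(d1, axis=1) ** 3
    return kappa.max()
```

For collinear control points the curvature is identically zero, and for the standard cubic approximation of a unit quarter circle the maximum curvature stays close to 1, which gives two easy sanity checks.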

Mathematical and Computational Applications doi: 10.3390/mca28020055

Authors: Johannes C. Joubert Daniel N. Wilke Patrick Pizette

This work describes a post-processing scheme for multiphase flow systems to characterize primary atomization. The scheme relies on the 2D fast Fourier transform (FFT) to separate the inherently multi-scale features present in the flow results. Emphasis is put on the robust quantitative analysis enabled by this scheme, with this work specifically focusing on comparing atomizer nozzle designs. The generalized finite difference (GFD) method is used to simulate a high-pressure gas injected into a viscous liquid stream. The proposed scheme is applied to time-averaged results exclusively and is used to evaluate both the surface and volume features of the fluid system. Because the proposed scheme recovers small-scale features better from surface information, the benefits of post-processing multiphase surface information rather than fluid volume information were demonstrated. While the volume information lacks the fine-scale details of the surface information, the duality between interfaces and fluid volumes leads to similar trends in the large-scale spatial structure recovered from both surface- and volume-based data sets.

Mathematical and Computational Applications doi: 10.3390/mca28020054

Authors: Abayomi Adewale Akinwande Dimitry Moskovskikh Elena Romanovskaia Oluwatosin Abiodun Balogun J. Pradeep Kumar Valentin Romanovski

Recent studies have shown the benefits of utilizing ceramic particles as reinforcement in metal alloys; nevertheless, certain drawbacks, including loss of ductility, embrittlement, and decreases in toughness, have been noted. With the objective of obtaining balanced performance, experts have suggested the addition of metal particles as a supplement to the ceramic reinforcement. Consequently, high-performance metal hybrid composites have been developed. However, achieving the optimal mix of the reinforcement combination with regard to the optimal performance of the developed composite remains a challenge. This research aimed to determine the optimal mixture of Al50Cu10Sn5Mg20Zn10Ti5 lightweight high-entropy alloy (LHEA), B4C, and ZrO2 for the fabrication of trihybrid titanium composites via direct laser deposition. A mixture design was used for the experimental design, and the experimental data were modeled and optimized to achieve the optimal performance of the trihybrid composite. The ANOVA, response surface plot, and ternary map analyses of the experimental results revealed that various combinations of reinforcement particles displayed a variety of response trends. Moreover, the analysis showed that these reinforcements significantly contributed to the magnitudes and trends of the responses. The generated models were competent for predicting the responses, and the best formulation consisted of 8.4% LHEA, 1.2% B4C, and 2.4% ZrO2.

Mathematical and Computational Applications doi: 10.3390/mca28020053

Authors: Martin Philip Venter Izak Johannes Joubert

Soft robotics is an emerging field that leverages the compliant nature of materials to control shape and behaviour. However, designing soft robots presents a challenge: they do not have discrete points of articulation and instead articulate through deformation of whole regions of the robot. This results in a vast, unexplored design space with few established design methods. This paper presents a practical generative design process that combines the Encapsulation, Syllabus, and Pandemonium method with a reduced-order model to produce results comparable to the existing state-of-the-art in reduced design time, while including the human designer meaningfully in the design process and facilitating the inclusion of other numerical techniques such as Markov chain Monte Carlo (MCMC) methods. Using a combination of reduced-order models, L-systems, MCMC, curve matching, and optimisation, we demonstrate that our method can produce functional 2D articulating soft robot designs in less than 1 s, a significant reduction in design time compared to monolithic methods, which can take several days. Additionally, we qualitatively show how to extend our approach to produce more complex 3D robots, such as an articulating tentacle with multiple grippers.

Mathematical and Computational Applications doi: 10.3390/mca28020052

Authors: Kristina Laugksch Pieter Rousseau Ryno Laubscher

Physics-informed neural networks (PINNs) were developed to overcome the limitations associated with the acquisition of large training data sets that are commonly encountered when using purely data-driven machine learning methods. This paper proposes a PINN surrogate modeling methodology for steady-state integrated thermofluid systems modeling based on the mass, energy, and momentum balance equations, combined with the relevant component characteristics and fluid property relationships. The methodology is applied to two thermofluid systems that encapsulate the important phenomena typically encountered, namely: (i) a heat exchanger network with two different fluid streams and components linked in series and parallel; and (ii) a recuperated closed Brayton cycle with various turbomachines and heat exchangers. The results generated with the PINN models were compared to benchmark solutions generated via conventional, physics-based thermofluid process models. The largest average relative errors are 0.17% and 0.93% for the heat exchanger network and Brayton cycle, respectively. It was shown that the use of a hybrid Adam-TNC optimizer requires between 180 and 690 fewer iterations during the training process, thus providing a significant computational advantage over a pure Adam optimization approach. The resulting PINN models can make predictions 75 to 88 times faster than their respective conventional process models. This highlights the potential for PINN surrogate models as a valuable engineering tool in component and system design and optimization, as well as in real-time simulation for anomaly detection, diagnosis, and forecasting.

Mathematical and Computational Applications doi: 10.3390/mca28020051

Authors: Johannes C. Joubert Daniel N. Wilke Patrick Pizette

This paper presents a GPU-based, incompressible, multiphase generalized finite difference solver for simulating multiphase flow. The method includes a dampening scheme that allows for large density ratio cases to be simulated. Two verification studies are performed by simulating the relaxation of a square droplet surrounded by a light fluid and a bubble rising in a denser fluid. The scheme is also used to simulate the collision of binary droplets at moderate Reynolds numbers (250–550). The effects of the surface tension and density ratio are explored in this work by considering cases with Weber numbers of 8 and 180 and density ratios of 2:1 and 1000:1. The robustness of the multiphase scheme is highlighted when resolving thin fluid structures arising in both high and low density ratio cases at We = 180.

Mathematical and Computational Applications doi: 10.3390/mca28020050

Authors: Vuyo T. Hashe Thokozani J. Kunene

Hydrocyclones are devices used in numerous areas of the chemical, food, and mineral industries to separate fine particles. A hydrocyclone with a diameter of 50 mm was modeled using the commercial Simcenter STAR-CCM+ 13 computational fluid dynamics (CFD) simulation package. The CFD simulations confirmed the behavior of the different parameters, such as the volume fraction properties. The Reynolds Stress Model (RSM) and the combined volume of fluid (VOF) and discrete element model (DEM) technique for the water and air models were selected, solving semi-implicit pressure-linked equations that couple the momentum and continuity laws to obtain the pressure derivatives. The targeted particle sizes were in the range of 8–100 microns for a dewatering application. The depth of the vortex finder was varied between 20 mm, 30 mm, and 35 mm to observe the effects on pressure drop and separation efficiency. The water split ratio approached a 50% split between overflow and underflow rates as the length of the vortex finder increased. A high injection rate at the inlet results in better particle separation. The tangential and axial velocities increased as the vortex finder length increased. As the depth of the vortex finder increased, the time for particle re-entrainment into the underflow stream increased, and the separation efficiency improved.

Mathematical and Computational Applications doi: 10.3390/mca28020049

Authors: Pertti Mattila

Let A and B be Borel subsets of the Euclidean n-space with dim A + dim B > n. This is a survey on the following question: what can we say about the Hausdorff dimension of the intersections A ∩ (g(B) + z) for generic orthogonal transformations g and translations by z?
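A representative theorem of this type, stated here from memory as an illustration of the kind of answer the survey discusses (not quoted from the paper), reads:

```latex
\[
\dim_H \big( A \cap (g(B) + z) \big) \;\ge\; \dim_H A + \dim_H B - n
\]
```

for almost every orthogonal transformation $g$ and for $z$ in a set of positive Lebesgue measure, provided $\dim_H A + \dim_H B > n$. The right-hand side mirrors the dimension count familiar from transversal intersections of smooth submanifolds.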

Mathematical and Computational Applications doi: 10.3390/mca28020048

Authors: Himani Sharma Munish Kansal Ramandeep Behl

We propose a new optimal derivative-free iterative scheme without memory for solving non-linear equations. Many iterative schemes in the literature either diverge or fail to work when f′(x)=0; our proposed scheme works even in these cases. In addition, we extend the same idea to iterative methods with memory with the help of self-accelerating parameters estimated from the current and previous approximations, which increases the order of convergence from four to seven without any additional functional evaluation. To confirm the theoretical results, numerical examples and comparisons with some existing methods are included, which reveal that our scheme is more efficient than the existing schemes. Furthermore, basins of attraction are included to give a clear picture of the convergence of the proposed method as well as some of the existing methods.
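The authors' scheme itself is not reproduced here; as a point of reference, the classical derivative-free Steffensen method, which replaces f′(x) with a forward divided difference, illustrates the general idea of avoiding derivative evaluations:

```python
def steffensen(f, x0, tol=1e-12, max_iter=100):
    """Classical Steffensen iteration: derivative-free, quadratically convergent.

    Replaces f'(x) in Newton's method with the divided difference
    (f(x + f(x)) - f(x)) / f(x).
    """
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        g = (f(x + fx) - fx) / fx   # divided-difference estimate of f'(x)
        x = x - fx / g
    return x
```

Note that Steffensen's method still degrades near points where f′(x) = 0, which is precisely the failure mode the paper's scheme is designed to avoid.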

Mathematical and Computational Applications doi: 10.3390/mca28020047

Authors: Philip Frederik Ligthart Martin Philip Venter

This paper demonstrates the effectiveness of a hierarchical design framework in developing environment-specific behaviour for fluid-actuated soft robots. Our proposed framework employs multi-step optimisation and reduced-order modelling to reduce the computational expense associated with simulating non-linear materials used in the design process. Specifically, our framework requires the designer to make high-level decisions to simplify the optimisations, targeting simple objectives in earlier steps and more complex objectives in later steps. We present a case study, where our proposed framework is compared to a conventional direct design approach for a simple 2D design. A soft pneumatic bending actuator was designed that is able to perform asymmetrical motion when actuated cyclically. Our results show that the hierarchical framework can find almost 2.5 times better solutions in less than 3% of the time when compared to a direct design approach.

Mathematical and Computational Applications doi: 10.3390/mca28020046

Authors: Rhoda Ngira Aduke Martin P. Venter Corné J. Coetzee

Corrugated paperboard is a sandwich structure composed of wavy paper (fluting) bonded between two flat paper sheets (liners). The analysis of an entire package using three-dimensional numerical finite element models is computationally expensive due to the waved geometry of the board, which requires a relatively large number of elements in a simulation. Because of this, homogenisation approaches are used to derive equivalent homogeneous models with similar material properties. These techniques have been successfully implemented by various researchers to evaluate the strength of corrugated paperboard. However, studies analysing the various homogenisation techniques and their ranges of applicability are limited. This study analyses the application of three homogenisation techniques: classical laminate plate theory, first-order shear deformation theory and the deformation energy equivalence method in the evaluation of effective elastic material properties. In addition, inverse analysis has been applied to determine the effective properties of the board. Finite element models of four-point bending tests have been used to evaluate the accuracy of the three homogenisation techniques in comparison to the inverse method, and the results are reported.
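As a minimal sketch of the first of these techniques, classical laminate plate theory assembles the extensional (A), coupling (B) and bending (D) stiffness matrices from the ply stiffnesses and their through-thickness positions. The material values in the test below are placeholder magnitudes, not the paper properties studied in the article, and ply rotations are omitted for brevity.

```python
def reduced_stiffness(E1, E2, nu12, G12):
    # In-plane reduced stiffness matrix Q of an orthotropic ply (no rotation)
    nu21 = nu12 * E2 / E1
    d = 1.0 - nu12 * nu21
    return [[E1 / d, nu12 * E2 / d, 0.0],
            [nu12 * E2 / d, E2 / d, 0.0],
            [0.0, 0.0, G12]]

def abd_matrices(plies):
    """plies: list of (Q, thickness), bottom to top.
    Returns the extensional (A), coupling (B) and bending (D) 3x3 matrices
    of classical laminate plate theory, referenced to the mid-plane."""
    h = sum(t for _, t in plies)
    A = [[0.0] * 3 for _ in range(3)]
    B = [[0.0] * 3 for _ in range(3)]
    D = [[0.0] * 3 for _ in range(3)]
    z = -h / 2.0                                  # bottom surface coordinate
    for Q, t in plies:
        z0, z1 = z, z + t
        for i in range(3):
            for j in range(3):
                A[i][j] += Q[i][j] * (z1 - z0)
                B[i][j] += Q[i][j] * (z1 ** 2 - z0 ** 2) / 2.0
                D[i][j] += Q[i][j] * (z1 ** 3 - z0 ** 3) / 3.0
        z = z1
    return A, B, D
```

For a symmetric lay-up (identical liners around the core) the coupling matrix B vanishes, which is a convenient internal check.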

Mathematical and Computational Applications doi: 10.3390/mca28020045

Authors: Anku Mona Narang Vinay Kanwar

In this paper, a new one-parameter class of fixed point iterative methods is proposed to approximate the fixed points of contractive type mappings. The presence of an arbitrary parameter in the proposed family increases its interval of convergence. Further, we also propose new two-step and three-step fixed point iterative schemes. We also discuss the stability, strong convergence and speed of convergence of the proposed methods. Furthermore, numerical experiments are performed to check the applicability of the new methods, which are compared with well-known similar existing methods in the literature.
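A generic one-parameter family in the same spirit, though not the family proposed in the paper, is the Krasnoselskii–Mann iteration, where the parameter averages the current point with its image and can converge where plain Picard iteration oscillates:

```python
def mann_iteration(T, x0, alpha=0.5, tol=1e-10, max_iter=1000):
    """One-parameter fixed point iteration x_{n+1} = (1-a) x_n + a T(x_n).
    For a = 1 this is Picard iteration; 0 < a < 1 averages the step,
    which widens the set of starting points and mappings for which
    the iteration converges."""
    x = x0
    for _ in range(max_iter):
        x_new = (1.0 - alpha) * x + alpha * T(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```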

Mathematical and Computational Applications doi: 10.3390/mca28020044

Authors: Desejo Filipeson Sozinando Bernard Xavier Tchomeni Alfayo Anyika Alugongo

Diagnosis of faults in a rotor system operating in a fluid is a complex task in the field of rotating machinery. In an ideal scenario, a forced shutdown due to rotor-stator contact failure would necessitate the replacement of the rotor or stator. However, factors such as time constraints, economic considerations, and the aging of infrastructure make it imprudent to abruptly shut down machinery that is still safe to operate. The purpose of this paper is to present an experimental study that validates the theoretical results of the dynamic behavior and friction detection using the wavelet synchrosqueezing transformation (WSST) method for recurrent rotor-stator contacts in a fluid environment, as presented in a previous study. The investigation focused on the analysis of whirl orbits, shaft deflection, and fluctuation frequency during passage through critical speeds. The WSST method was used to decompose the dynamic responses of the rotor in the supercritical speed zone into several supercomponents. The variation of the high-frequency component was studied based on the fluctuation of the instantaneous frequency (IF) technique. Additionally, the fast Fourier transform (FFT) method, in conjunction with the WSST technique, was used to calculate the variation in the amplitude of high-order frequencies in the vibration signal spectrum. The experimental study revealed that the split in resonance caused by rubbing effects is reduced when the rotor and stator interact with an inviscid fluid. However, despite the effects of elasticity and fluid boundaries generating self-excitation at low frequencies and uneven motion due to stator clearance, the experimental results were consistent with the theoretical analysis, demonstrating the effectiveness of the contact detection method based on WSST.

Mathematical and Computational Applications doi: 10.3390/mca28020043

Authors: Mopeli Khama Quinn Reynolds

Metallurgical processes are characterized by a complex interplay of heat and mass transfer, momentum transfer, and reaction kinetics, and these interactions play a crucial role in reactor performance. Integrating chemistry and transport results in stiff, non-linear equations and long time and length scales, which ultimately leads to a high computational expense. The current study employs an OpenFOAM solver based on a fictitious domain method to analyze gas-solid reactions in a porous medium using hydrogen as a reducing agent. The reduction of oxides with hydrogen involves hierarchical phenomena that influence the reaction rates at various temporal and spatial scales; thus, multi-scale models are needed to bridge the length scales from micro-scale to macro-scale accurately. As a first step towards developing such capabilities, the current study analyses OpenFOAM reacting flow methods in cases related to the hydrogen reduction of iron and manganese oxides. Since reduction of the oxides of interest with hydrogen requires significant modifications to current industrial processes, this model can aid in their design and optimization. The model was verified against experimental data, and the dynamic features of the porous medium observed as the reaction progresses are well captured by the model.

Mathematical and Computational Applications doi: 10.3390/mca28020042

Authors: Kwanda Mercury Dlamini Vuyo Terrence Hashe Thokozani Justin Kunene

The study numerically investigated the noise dissipation, cavitation, output power, and energy produced by marine propellers. A Ffowcs Williams–Hawkings (FW–H) model was used to determine the effects of three different marine propellers with three to five blades and a fixed advance ratio. The large-eddy simulation model best predicted the spatial and temporal variation of the turbulent structures, better illustrating the flow physics. It was found that a high angle of incidence between the blade's leading edge and the water flow direction typically causes the hub vortex to cavitate. The roll-up of the cavitating tip vortex was closely related to propeller noise. The five-blade propeller was quieter under the same dynamic conditions, such as the advance ratio, than the three- or four-blade propellers.

Mathematical and Computational Applications doi: 10.3390/mca28020041

Authors: Anshika Garg Shubham Gupta Nitesh Tewari Sukeshana Srivastav Arnab Chanda

Traumatic dental injuries (TDI) are frequent among individuals of all ages, with a prevalence ranging from 12–22%, with crown and crown–root fractures being the most common. Fragment reattachment using light-cured nanocomposites is the recommended method for the management of these fractures. Though there are several clinical studies that have assessed the efficacy of such materials, an in-silico characterization of the effects of traumatic forces on the re-attached fragments has never been performed. Hence, this study aimed to evaluate the efficacy of various adhesive materials in crown and crown–root reattachments through computational modelling. A full-scale permanent maxillary anterior tooth model was developed by precisely segmenting 3D scanned cone beam computed tomography (CBCT) images of the pulp, root, and enamel. The full-scale 3D tooth model was then subjected to a novel numerical cutting operation to describe the crown and crown–root fractures. The fractured tooth models were then filled computationally with three commonly used filler (or adhesive) materials, namely flowable composite, resin cement, and resin adhesive, and subjected to masticatory and traumatic loading conditions. The flowable composite demonstrated a statistically significant difference and the lowest produced stresses under masticatory loading. Resin cement demonstrated reduced stress values for crown–root fractures under masticatory loading after reattachment using adhesive materials. During traumatic loading, resin cement demonstrated lower displacements and stress values for both fracture types. The novel findings reported in this study are anticipated to assist dentists in selecting the most appropriate adhesive materials that induce the least stress on the reattached tooth when subjected to a second trauma, for both crown and crown–root fractures.

Mathematical and Computational Applications doi: 10.3390/mca28020040

Authors: Carl-Hein Visser Gerhard Venter Melody Neaves

When performing a digital image correlation (DIC) measurement, multi-camera stereo-DIC is generally preferred over single-camera 2D-DIC. Unlike 2D-DIC, stereo-DIC is able to minimise the in-plane strain error that results from out-of-plane motion. This makes 2D-DIC a less viable alternative for strain measurements than stereo-DIC, despite being less financially and computationally expensive. This work, therefore, proposes a strain-gauge-based method for the compensation of errors from out-of-plane motion in 2D-DIC strain measurements on planar specimens. The method was first developed using equations for the theoretical strain error from out-of-plane motions in 2D-DIC and was then applied experimentally in tensile tests to two different dog-bone specimen geometries. The compensation method resulted in a clear reduction in the strain error in 2D-DIC. The strain-gauge-based method thus improves the accuracy of a 2D-DIC measurement, making it a more viable option for performing full-field strain measurements and providing a possible alternative in cases where stereo-DIC is not practical or is unavailable.

Mathematical and Computational Applications doi: 10.3390/mca28020039

Authors: Jahnavi Merupula V. S. Vaidyanathan Christophe Chesneau

Regression models in which the response variable has a compound distribution have applications in actuarial science. For example, the aggregate claim amount in a vehicle insurance portfolio can be modeled using a compound Poisson distribution. In this paper, we propose a regression model wherein the response variable is assumed to have a compound Conway–Maxwell–Poisson (CMP) distribution. The CMP distribution is a parsimonious two-parameter generalization of the Poisson distribution that accounts for both over- and under-dispersed count data, making it suitable for application in various fields. A two-part methodology in the framework of a generalized linear model is proposed to estimate the parameters. Additionally, a method to obtain the prediction interval of the response variable is developed. The workings of the proposed methodology are illustrated through simulated data. An application of the compound CMP regression model to real-life vehicle insurance claims data is presented.
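A hedged sketch of the CMP building block underlying this compound model: the pmf is p(k) = λ^k / (k!)^ν / Z(λ, ν), where Z is the normalizing series. With ν = 1 it reduces to the Poisson distribution, which the test uses as a sanity check. The truncation length is an assumption adequate for moderate λ.

```python
def cmp_pmf(k, lam, nu, terms=200):
    """Conway-Maxwell-Poisson pmf p(k) = lam^k / (k!)^nu / Z(lam, nu).
    nu < 1 captures over-dispersion, nu > 1 under-dispersion, nu = 1 Poisson.
    Both Z and the numerator are accumulated via the term ratio lam / j**nu
    so that no factorial overflows."""
    Z, term = 1.0, 1.0
    for j in range(1, terms):
        term *= lam / j ** nu
        Z += term
    pk = 1.0
    for j in range(1, k + 1):
        pk *= lam / j ** nu
    return pk / Z
```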

Mathematical and Computational Applications doi: 10.3390/mca28020038

Authors: Manuel Vargas-Martínez Nelson Rangel-Valdez Eduardo Fernández Claudia Gómez-Santillán María Lucila Morales-Rodríguez

Simulated annealing is a metaheuristic that balances exploration and exploitation to solve global optimization problems. However, to deal with multi- and many-objective optimization problems, this balance needs to be improved due to diverse factors such as the number of objectives. To address this issue, this work proposes MOSA/D, a hybrid framework for multi-objective simulated annealing based on decomposition and evolutionary perturbation functions. According to the literature, the decomposition strategy promotes diversity in a population while evolutionary perturbations add convergence toward the Pareto front; however, a question remains: what is the effect of such components when included as part of a multi-objective simulated annealing design? Hence, this work studies the performance of the MOSA/D framework considering two widely used perturbation operators in its implementation: classical genetic operators and differential evolution. The proposed algorithms are MOSA/D-CGO, based on classical genetic operators, and MOSA/D-DE, based on differential evolution operators. The main contribution of this work is the performance analysis of MOSA/D using both perturbation operators and the identification of the one most suitable for the framework. The approaches were tested using the DTLZ benchmarks on two and three objectives and the CEC2009 benchmarks on two, three, five, and ten objectives; the performance analysis considered diversity and convergence measured through the hypervolume (HV) and inverted generational distance (IGD) indicators. The results point to a promising performance improvement in favor of MOSA/D-DE.
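The two ingredients MOSA/D combines can be sketched as follows: a Tchebycheff scalarization (a common decomposition choice, assumed here; the paper may use a different scheme) and the standard annealing acceptance rule applied to the scalarized value of each subproblem.

```python
import math
import random

def tchebycheff(fvals, weights, z_star):
    # Decomposition: scalarize an objective vector against one weight
    # vector and the ideal point z*; each weight vector defines a subproblem.
    return max(w * abs(f - z) for f, w, z in zip(fvals, weights, z_star))

def sa_accept(g_old, g_new, temperature, rng=random):
    """Simulated annealing acceptance on the scalarized value: always
    accept improvements, accept worsening moves with prob exp(-d/T)."""
    if g_new <= g_old:
        return True
    return rng.random() < math.exp(-(g_new - g_old) / temperature)
```

In the full framework, the perturbation that generates the candidate solution is where the genetic or differential evolution operators studied in the paper come in.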

Mathematical and Computational Applications doi: 10.3390/mca28020037

Authors: Aleksandr N. Rozhkov Vera V. Galishnikova

Building information systems use topological tables to implement the transition from two-dimensional line drawings of the geometry of buildings to digital three-dimensional models of linear complexes. The topological elements of the complex are named and the topological relations of the complex are described by arranging the element names in topological tables. The efficient construction and modification of topological tables for complete buildings is investigated. The topology of a linear complex with nodes, edges, faces, and cells is described with 12 tables. Three of the tables of a complex are independent of each other and form a basis for the construction of the other tables. A highly efficient construction algorithm with complexity O(number of cells) for typical buildings with an approximately constant number of edges per face and faces per cell is presented. In practice, building designs and their digital models are frequently modified. A modification algorithm is presented, whose complexity equals that of the construction algorithm. Examples illustrate that the efficient algorithms permit the replacement of the conventional focus on the topology of building components by a focus on the topology of the entire building. A set of properties of the original, which are not explicitly described by the topological tables, for example, the orientation of surfaces and multiply connected domains, are analyzed in the paper. An overview of the research dealing with the topological attributes that are not contained in topological tables concludes the paper.

Mathematical and Computational Applications doi: 10.3390/mca28020036

Authors: Hector Ascencion-Mestiza Serguei Maximov Efrén Mezura-Montes Juan Carlos Olivares-Galvan Rodrigo Ocon-Valdez Rafael Escarela-Perez

The conventional methods of parameter estimation in transformers, such as the open-circuit and short-circuit tests, are not always available, especially when the transformer is already in operation and its disconnection is impossible. Therefore, alternative (non-interruptive) methods of parameter estimation have become of great importance. In this work, non-interruptive estimation of the transformer equivalent circuit parameters is presented using the following metaheuristic optimization methods: the genetic algorithm (GA), particle swarm optimization (PSO) and the gravitational search algorithm (GSA). These algorithms provide a maximum average error of 12%, which is twice as good as the results found in the literature for estimation of the equivalent circuit parameters of transformers at a frequency of 50 Hz. This demonstrates that the proposed GA, PSO and GSA metaheuristic optimization methods can be applied to estimate the equivalent circuit parameters of single-phase distribution and power transformers with a reasonable degree of accuracy.
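A minimal sketch of one of the three metaheuristics, PSO; the hyperparameter values are conventional defaults rather than those tuned in the paper, and the objective below is a stand-in for the circuit-parameter fitting error the algorithms actually minimize.

```python
import random

def pso(objective, bounds, n_particles=30, iters=200,
        w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal particle swarm optimizer; objective maps a parameter vector
    (e.g. transformer equivalent-circuit resistances and reactances)
    to a fitting error to be minimized."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # per-particle best positions
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # inertia + cognitive pull (own best) + social pull (swarm best)
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```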

Mathematical and Computational Applications doi: 10.3390/mca28020035

Authors: Enrique Naredo Candelaria Sansores Flaviano Godinez Francisco López Paulo Urbano Leonardo Trujillo Conor Ryan

Robotics technology has made significant advancements in various fields in industry and society. It is clear how robotics has transformed manufacturing processes and increased productivity. Additionally, navigation robotics has also been impacted by these advancements, with investors now investing in autonomous transportation for both public and private use. This research aims to explore how training scenarios affect the learning process for autonomous navigation tasks. The primary objective is to address whether the initial conditions (learning cases) have a positive or negative impact on the ability to develop general controllers. By examining this research question, the study seeks to provide insights into how to optimize the training process for autonomous navigation tasks, ultimately improving the quality of the controllers that are developed. Through this investigation, the study aims to contribute to the broader goal of advancing the field of autonomous navigation and developing more sophisticated and effective autonomous systems. Specifically, we conducted a comprehensive analysis of a particular navigation environment using evolutionary computing to develop controllers for a robot starting from different locations and aiming to reach a specific target. The final controller was then tested on a large number of unseen test cases. Experimental results provide strong evidence that the initial selection of the learning cases plays a role in evolving general controllers. This work includes a preliminary analysis of a specific set of small learning cases chosen manually, provides an in-depth analysis of learning cases in a particular navigation task, and develops a tool that shows the impact of the selected learning cases on the overall behavior of a robot's controller.

Mathematical and Computational Applications doi: 10.3390/mca28020034

Authors: Carlos Coello Erik Goodman Kaisa Miettinen Dhish Saxena Oliver Schütze Lothar Thiele

Kalyanmoy Deb was born in Udaipur, Tripura, the smallest state of India at the time, in 1963 [...]

Mathematical and Computational Applications doi: 10.3390/mca28020033

Authors: Li Dai Mi-Da Cui Xiao-Xiang Cheng

To rigorously evaluate the health of a steel bridge subjected to vehicle-induced fatigue, both a detailed numerical model and effective fatigue analysis methods are needed. In this paper, the process for establishing the structural health monitoring (SHM)-oriented finite element (FE) model and assessing the vehicle-induced fatigue damage is presented for a large, specially shaped steel arch bridge. First, the bridge is meticulously modeled using multiple FEs to facilitate the exploration of the local structural behavior. Second, manual tuning and model updating are conducted according to the modal parameters measured at the bridge's location. Since the numerical model comprises a large number of FEs, two surrogate-model-based methods are employed to update the model. Third, the established models are validated by using them to predict the structure's mode shapes and the actual structural behavior for the case in which the whole bridge is subjected to static vehicle loads. Fourth, using the numerical model, a new fatigue analysis method based on the high-cycle fatigue damage accumulation theory is employed to further analyze the vehicle-induced fatigue damage to the bridge. The results indicate that manual tuning and model updating are indispensable for SHM-oriented FE models with erroneous configurations, and one surrogate-model-based model updating method is effective. In addition, it is shown that the fatigue analysis method based on the high-cycle fatigue damage accumulation theory is applicable to real-world engineering cases.

Mathematical and Computational Applications doi: 10.3390/mca28020032

Authors: Bevan I. Smith Charles Chimedza Jacoba H. Bührmann

This study critically evaluates a recent machine learning method, the X-Learner, which aims to estimate treatment effects by predicting counterfactual quantities. It uses information from the treated group to predict counterfactuals for the control group and vice versa. However, previous studies have either applied the method only to real-world data without knowing the ground-truth treatment effects, or have not compared it with the traditional regression methods for estimating treatment effects. This study therefore critically evaluates the method by simulating various scenarios that include observed confounding and non-linearity in the data. Although the regression-based X-Learner performs just as well as the traditional regression model, the other base learners performed worse. Additionally, when non-linearity was introduced into the data, the results of the X-Learner became inaccurate.
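A hedged sketch of the X-Learner's imputation logic, using simple one-dimensional linear base learners for brevity (the study evaluates richer base learners): fit an outcome model per arm, impute each unit's treatment effect with the opposite arm's model, fit effect models on the imputed effects, and blend them with a propensity weight.

```python
def fit_linear(xs, ys):
    # Ordinary least squares for y = a + b*x, a stand-in base learner.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return lambda x: a + b * x

def x_learner(x1, y1, x0, y0, propensity=0.5):
    """X-Learner sketch: (1) fit outcome models on each arm, (2) impute
    individual effects using the opposite arm's model, (3) fit effect
    models on the imputed effects, (4) blend with the propensity score."""
    mu0 = fit_linear(x0, y0)                     # control outcome model
    mu1 = fit_linear(x1, y1)                     # treated outcome model
    d1 = [y - mu0(x) for x, y in zip(x1, y1)]    # imputed effects, treated
    d0 = [mu1(x) - y for x, y in zip(x0, y0)]    # imputed effects, control
    tau1, tau0 = fit_linear(x1, d1), fit_linear(x0, d0)
    g = propensity
    return lambda x: g * tau0(x) + (1 - g) * tau1(x)
```

On synthetic data with a constant true effect of 5 the blended estimator recovers it exactly, which is the kind of ground-truth check the simulated scenarios in the study enable.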

Mathematical and Computational Applications doi: 10.3390/mca28020031

Authors: Sandra Ferreira

The rapid advances in modeling research have created new challenges and opportunities for statisticians [...]

Mathematical and Computational Applications doi: 10.3390/mca28020030

Authors: Yang Zhou Xiaofu Ji

This paper is concerned with the problem of static output feedback control for a class of continuous-time nonlinear time-delay semi-Markov jump systems with incremental quadratic constraints. For a class of time-delay semi-Markov jump systems satisfying incremental quadratic constrained nonlinearity, an appropriate mode-dependent Lyapunov–Krasovskii functional is constructed. Based on the matrix transformation, projection theorem and convex set principle, the mode-dependent static output feedback control laws are designed. The feedback control law is given in the form of a linear matrix inequality, which is convenient for a numerical solution. Finally, two practical examples are given to illustrate the effectiveness and superiority of the proposed method.

Mathematical and Computational Applications doi: 10.3390/mca28020029

Authors: José Alfredo Brambila-Hernández Miguel Ángel García-Morales Héctor Joaquín Fraire-Huacuja Eduardo Villegas-Huerta Armando Becerra-del-Ángel

This paper proposes a hybrid harmony search algorithm that incorporates a method of reinitializing the harmony memory using a particle swarm optimization algorithm with an improved opposition-based learning method (IOBL) to solve continuous optimization problems. This method allows the algorithm to obtain better results by enlarging the search space of the solutions. The approach has been validated by comparing the performance of the proposed algorithm with that of a state-of-the-art harmony search algorithm that also uses IOBL, solving fifteen standard mathematical functions and applying the non-parametric Wilcoxon test at a 5% significance level. Computational experiments show that the proposed algorithm outperforms the state-of-the-art algorithm: in quality, it is better in fourteen of the fifteen instances, and in efficiency, it is better in seven of the fifteen instances.
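The opposition idea at the core of IOBL can be sketched as follows; this is plain opposition-based learning (reflect each candidate across the centre of its search interval and keep the better half), while the improved variant used in the paper adds refinements not reproduced here.

```python
import random

def opposite(solution, lower, upper):
    # Opposition-based learning: reflect each component across the
    # centre of its search interval, x' = lo + hi - x.
    return [lo + hi - x for x, lo, hi in zip(solution, lower, upper)]

def obl_init(objective, lower, upper, size, rng=random):
    """Initialize a population with OBL: generate random candidates plus
    their opposites, then keep the best half. Evaluating a point and its
    opposite widens the effective coverage of the search space."""
    pop = [[rng.uniform(lo, hi) for lo, hi in zip(lower, upper)]
           for _ in range(size)]
    pop += [opposite(p, lower, upper) for p in pop]
    pop.sort(key=objective)
    return pop[:size]
```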

Mathematical and Computational Applications doi: 10.3390/mca28020028

Authors: Gurjeet Singh Sonia Bhalla Ramandeep Behl

Problems such as population growth, the continuous stirred tank reactor (CSTR), and the ideal gas have been studied over the last four decades in the fields of medical science, engineering, and applied science, respectively. Some of the main motivations were to understand the pattern of such issues and how to obtain their solution. With the help of applied mathematics, these problems can be modeled as nonlinear expressions with similar properties, and the required solution can then be obtained by means of iterative techniques. In this manuscript, we propose a new iterative scheme for computing multiple roots (without prior knowledge of the multiplicity m) based on multiplicative calculus rather than standard calculus. The structure of our scheme stands on the well-known Schröder method and retains the same convergence order. Some numerical examples are tested to find the roots of nonlinear equations, and the results are found to be competitive with ordinary derivative methods. Finally, the new scheme is also analyzed by the basins of attraction, which also support the theoretical aspects.
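For reference, the classical (ordinary-derivative) Schröder iteration on which the scheme stands can be sketched as below; the multiplicative-calculus version developed in the paper replaces the ordinary derivatives, so this is background, not the proposed method.

```python
def schroder(f, df, d2f, x0, tol=1e-12, max_iter=100):
    """Classical Schroder iteration for multiple roots:
        x_{n+1} = x - f f' / (f'^2 - f f'').
    Converges quadratically to a root without prior knowledge of its
    multiplicity m (it is Newton's method applied to f/f')."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        dfx, d2fx = df(x), d2f(x)
        x = x - fx * dfx / (dfx * dfx - fx * d2fx)
    return x
```

The test uses f(x) = (x² − 2)², whose root √2 has multiplicity two; plain Newton would slow to linear convergence there, while Schröder does not.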

Mathematical and Computational Applications doi: 10.3390/mca28010027

Authors: Ricardo Pérez-Rodríguez Sergio Frausto-Hernández

The truck and trailer routing problem (TTRP) has been widely studied under different approaches; its practical nature keeps it attractive for the development of new evolutionary algorithms. This research details a new estimation of distribution algorithm coupled with a radial probability function from the hydrogen atom. Continuous values are used in the solution representation, and every value indicates the distance between the electron and the nucleus in a hydrogen atom. The key point is to exploit the radial probability distribution to construct offspring and to tackle the drawbacks of estimation of distribution algorithms. Various instances and numerical experiments are presented to illustrate and validate this research. Based on the performance of the proposed scheme, we conclude that incorporating radial probability distributions helps to improve estimation of distribution algorithms.
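For the hydrogen ground state with Bohr radius a0 = 1, the radial probability function referred to above is P(r) = 4 r² e^(−2r), which peaks at r = a0. The sketch below gives the density and a rejection sampler as an illustration of drawing electron-nucleus distances; it is not necessarily how the paper's EDA constructs offspring.

```python
import math

def radial_pdf_1s(r):
    # Ground-state hydrogen radial probability density (a0 = 1):
    # P(r) = 4 r^2 exp(-2 r), maximal at r = 1 and with mean 1.5.
    return 4.0 * r * r * math.exp(-2.0 * r)

def sample_radial(rng, r_max=10.0, p_max=4.0 * math.exp(-2.0)):
    # Rejection sampling from P(r); p_max is the density's peak value
    # and r_max truncates the (negligible) far tail.
    while True:
        r = rng.uniform(0.0, r_max)
        if rng.uniform(0.0, p_max) < radial_pdf_1s(r):
            return r
```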

Mathematical and Computational Applications doi: 10.3390/mca28010026

Authors: Raktim Biswas Deepak Sharma

Multi-objective reliability-based design optimization (MORBDO) is an efficient tool for generating reliable Pareto-optimal (PO) solutions. However, generating such PO solutions requires many function evaluations for reliability analysis, thereby increasing the computational cost. In this paper, a single-loop multi-objective reliability-based design optimization formulation is proposed that approximates the reliability analysis using Karush–Kuhn–Tucker (KKT) optimality conditions. Further, chaos control theory is used for updating the point estimated through the KKT conditions to avoid any convergence issues. In order to generate the reliable point in the feasible region, the proposed formulation also incorporates the shifting vector approach. The proposed MORBDO formulation is solved using differential evolution (DE) that uses a heuristic convergence parameter based on the hypervolume indicator for performing different mutation operations. DE incorporating the proposed formulation is tested on two mathematical and one engineering example. The results demonstrate the generation of a better set of reliable PO solutions using the proposed method over the double-loop variant of multi-objective DE. Moreover, the proposed method requires 6×–377× fewer function evaluations than the double-loop-based DE.

Mathematical and Computational Applications doi: 10.3390/mca28010025

Authors: Suleman Nasiru Abdul Ghaniyyu Abubakari Christophe Chesneau

The usefulness of (probability) distributions in the field of biomedical science cannot be overstated, and several distributions have been used in this field to perform statistical analyses and make inferences. In this study, we develop the arctan power (AP) distribution and illustrate its application using biomedical data. The distribution is flexible in the sense that its probability density function can exhibit left-skewed, right-skewed, J and reversed-J shapes. The characteristics of the corresponding hazard rate function also suggest that the distribution is capable of modeling data with monotonic and non-monotonic failure rates. A bivariate extension of the AP distribution is also created to model the interdependence of two random variables or pairs of data. The application reveals that the AP distribution provides a better fit to the biomedical data than other existing distributions. The parameters of the distribution can also be fairly accurately estimated using a Bayesian approach, which is also elaborated. Finally, quantile and modal regression models based on the AP distribution provide better fits to the biomedical data than other existing regression models.

Mathematical and Computational Applications doi: 10.3390/mca28010024

Authors: Michael O. Opoku Eric N. Wiah Eric Okyere Albert L. Sackitey Emmanuel K. Essel Stephen E. Moore

We present a Caputo fractional order mathematical model that describes the cellular infection dynamics of the Hepatitis B virus and the immune response of the body with a Holling type II functional response. We study the existence of unique positive solutions and the local and global stability of the virus-free and endemic equilibria. Finally, we present numerical results using the Adams-type predictor–corrector iterative scheme.

Mathematical and Computational Applications doi: 10.3390/mca28010023

Authors: Gurjeet Singh Sonia Bhalla Ramandeep Behl

Five decades ago, Grossman and Katz suggested a new definition of differential and integral calculus that utilizes the multiplication and division operators instead of addition and subtraction. Multiplicative calculus is a vital part of applied mathematics because of its applications in areas such as biology, finance, biomedicine, and economics. Therefore, we used a multiplicative calculus approach to develop a new fourth-order iterative scheme for multiple roots based on the well-known King's method. In addition, we also propose a detailed convergence analysis of our scheme with the help of the multiplicative calculus approach rather than the standard one. Different kinds of numerical comparisons have been suggested and analyzed. The obtained results (from line graphs, bar graphs and tables) are very impressive compared to earlier iterative methods of the same order with the ordinary derivative. Finally, the convergence of our technique is also analyzed by the basins of attraction, which also support the theoretical aspects.

Mathematical and Computational Applications doi: 10.3390/mca28010022

Authors: Junya Sato

Many approaches have been developed to solve the hand–eye calibration problem. The traditional approach involves a precise mathematical model, which has advantages and disadvantages. For example, mathematical representations can provide numerical and quantitative results to users and researchers. Thus, it is possible to explain and understand the calibration results. However, information about the end-effector, such as the position attached to the robot and its dimensions, is not considered in the calibration process. If there is no CAD model, additional calibration is required for accurate manipulation, especially for a handmade end-effector. A neural network-based method is used as the solution to this problem. By training a neural network model using data created via the attached end-effector, additional calibration can be avoided. Moreover, it is not necessary to develop a precise and complex mathematical model. However, it is difficult to provide quantitative information because a neural network is a black box. Hence, a method with both advantages is proposed in this study. A mathematical model was developed and optimized using the data created by the attached end-effector. To acquire accurate data and evaluate the calibration results, a tablet computer was utilized. The established method achieved a mean positioning error of 1.0 mm.

Mathematical and Computational Applications doi: 10.3390/mca28010021

Authors: Faisal Salah Abdelmgid O. M. Sidahmed K. K. Viswanathan

In this paper, the numerical solutions for magneto-hydrodynamic Hiemenz flow over a nonlinear stretching sheet and the Brownian motion effects of nanoparticles through a porous medium with chemical reaction and radiation are studied. The repercussions of thermophoresis and mass transfer at the stagnation point flow are discussed. The plate moves in the direction opposite to, or along, the free stream. The underlying PDEs are reshaped into a set of ordinary differential equations employing appropriate transformations. They are addressed numerically using the successive linearization method, an efficient systematic process. The main goal of this study is to compare the solutions obtained using the successive linearization method for the velocity and temperature equations as the parameter m changes, thereby demonstrating its accuracy and suitability for solving nonlinear differential equations. For comparison, tables containing the results are presented. This contrast is significant because it demonstrates the accuracy with which a set of nonlinear differential equations can be solved using the successive linearization method. The resulting solution is examined and discussed with respect to a number of engineering parameters. Graphs illustrate the influence of the distinct parameters that govern the motion.

]]>Mathematical and Computational Applications doi: 10.3390/mca28010020

Authors: S. Divya S. Eswaramoorthi Karuppusamy Loganathan

The main goal of the current research is to investigate the numerical computation of Ag/Al2O3 nanofluid over a Riga plate with injection/suction. The energy equation is formulated using the Cattaneo&ndash;Christov heat flux, non-linear thermal radiation, and a heat sink/source. The governing equations are non-dimensionalized by employing suitable transformations, and the numerical results are obtained using the MATLAB bvp4c technique. The effects of porosity, the Forchheimer number, radiation, suction/injection, velocity slip, and nanoparticle volume fraction on fluid flow and heat transfer are investigated. Furthermore, the local skin friction coefficient (SFC) and local Nusselt number (LNN) are also addressed. Our computational results coincide closely with previously reported outcomes. We notice that the Forchheimer number, suction/injection, slip, and nanoparticle volume fraction factors reduce the velocity profile. We also note that the heat transfer gradient decreases with increasing rates of thermal radiation and convective heating. A 40% presence of the Hartmann number improves the drag force by 14% and the heat transfer gradient by 0.5%. A 20% presence of nanoparticle volume fraction decreases the heat transfer gradient by 21% for Ag nanoparticles and by 18% for Al2O3 nanoparticles.

]]>Mathematical and Computational Applications doi: 10.3390/mca28010019

Authors: Nat Promma Nawinda Chutsagulprom

The primary objective of this article is to present an adaptive parameter VAR-KF technique (APVAR-KF) to forecast stock market performance and macroeconomic factors. The method exploits a vector autoregressive model as a system identification technique, and the Kalman filter serves as a recursive state parameter estimation tool. A further refinement incorporates the GARCH model to quantify an automatic observation covariance matrix in the Kalman filter step. To verify the efficiency of our proposed method, we conducted an experimental simulation applied to the main stock exchange index, real effective exchange rate, and consumer price index of Thailand and Indonesia from January 1997 to May 2021. The APVAR-KF method is generally shown to have superior performance relative to the conventional VAR(1) model and the VAR-KF model with constant parameters.
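The recursive parameter estimation step can be illustrated in miniature: a Kalman filter tracking a single AR(1) coefficient, the univariate analogue of the VAR parameter state in APVAR-KF. All values (noise variances, sample size, the true coefficient 0.8) are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def ar1_kalman(y, q=1e-5, r=1e-2):
    """Track a (possibly time-varying) AR(1) coefficient with a scalar
    Kalman filter: the state a_t follows a random walk, and each
    observation is y_t = a_t * y_{t-1} + e_t with variance r."""
    a, p = 0.0, 1.0                # state estimate and its variance
    for t in range(1, len(y)):
        p += q                     # predict: random-walk state transition
        h = y[t - 1]               # observation "matrix" is the lagged value
        s = h * p * h + r          # innovation variance
        k = p * h / s              # Kalman gain
        a += k * (y[t] - h * a)    # update with the one-step forecast error
        p *= (1.0 - k * h)
    return a

rng = np.random.default_rng(0)
n, true_a = 2000, 0.8
y = np.zeros(n)
for t in range(1, n):
    y[t] = true_a * y[t - 1] + 0.1 * rng.standard_normal()
a_hat = ar1_kalman(y)              # should recover roughly 0.8
```

In the full method the state is the vector of VAR coefficients and the GARCH model supplies the observation variance r adaptively at each step.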

]]>Mathematical and Computational Applications doi: 10.3390/mca28010018

Authors: Maria Immaculate Joyce Jagan Kandasamy Sivasankaran Sivanandam

Currently, the efficiency of heat exchange is determined not only by enhancements in the rate of heat transfer but also by economic and accompanying considerations. Responding to this demand, many scientists have been involved in improving heat transfer performance, which is referred to as heat transfer enhancement, augmentation, or intensification. This study deals with the influence of velocity slip, convective boundary conditions, Joule heating, and chemical reactions on hybrid Cu&ndash;Al2O3/water nanofluid flow over a porous stretched sheet, using an adapted Tiwari&ndash;Das model. The nonlinear fundamental equations for continuity, momentum, energy, and concentration are transformed into non-dimensional nonlinear ordinary differential equations by similarity transformations. Numerical calculations are performed using HAM, and the outcomes for velocity, temperature, and concentration are traced on graphs. The temperature and concentration profiles are elevated as porosity is increased, whereas the velocity is decreased. The Biot number increases the temperature profile. The rate of entropy is enhanced as the Brinkman number is raised. A decrease in the velocity is seen as the slip increases.

]]>Mathematical and Computational Applications doi: 10.3390/mca28010017

Authors: António Gaspar-Cunha Paulo Costa Francisco Monaco Alexandre Delbem

Solving real-world multi-objective optimization problems with multi-objective optimization algorithms becomes difficult when the number of objectives is high, since the types of algorithms generally used to solve these problems are based on the concept of non-dominance, which ceases to work as the number of objectives grows. This problem is known as the curse of dimensionality. Simultaneously, the existence of many objectives, a characteristic of practical optimization problems, makes choosing a solution to the problem very difficult. Different approaches are used in the literature to reduce the number of objectives required for optimization. This work proposes a machine learning methodology, designated FS-OPA, to tackle this problem. The proposed methodology was assessed using the DTLZ benchmark problems suggested in the literature and compared with similar algorithms, showing good performance. Finally, the methodology was applied to a difficult real problem in polymer processing, demonstrating its effectiveness. The proposed algorithm has some advantages over a similar machine-learning-based algorithm in the literature (NL-MVU-PCA), namely, the possibility of establishing variable&ndash;variable and objective&ndash;variable relations (not only objective&ndash;objective), and the elimination of the need to define/choose a kernel or to optimize algorithm parameters. The collaboration with the DM(s) allows explainable solutions to be obtained.
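The failure of non-dominance in many-objective problems can be demonstrated directly: for uniformly random points, the fraction that is Pareto non-dominated approaches one as the number of objectives grows, so dominance-based selection loses its discriminating power. A small self-contained experiment (point counts and dimensions are arbitrary choices for illustration):

```python
import random

def dominates(a, b):
    """True if a Pareto-dominates b (minimization): a is no worse in
    every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_fraction(n_points, n_obj, seed=0):
    """Fraction of uniformly random points that no other point dominates."""
    rng = random.Random(seed)
    pts = [[rng.random() for _ in range(n_obj)] for _ in range(n_points)]
    nd = [p for p in pts if not any(dominates(q, p) for q in pts)]
    return len(nd) / n_points

low = nondominated_fraction(200, 2)    # in 2-D, only a thin front survives
high = nondominated_fraction(200, 10)  # in 10-D, almost everything is non-dominated
```

With two objectives only a few percent of the points are non-dominated; with ten objectives the vast majority are, which is exactly the curse of dimensionality the abstract describes.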

]]>Mathematical and Computational Applications doi: 10.3390/mca28010016

Authors: Gianluigi Rozza Oliver Schütze Nicholas Fantuzzi

This Special Issue comprises the first collection of papers submitted by the Editorial Board Members (EBMs) of the journal Mathematical and Computational Applications (MCA), as well as outstanding scholars working in the core research fields of MCA [...]

]]>Mathematical and Computational Applications doi: 10.3390/mca28010015

Authors: MCA Editorial Office MCA Editorial Office

High-quality academic publishing is built on rigorous peer review [...]

]]>Mathematical and Computational Applications doi: 10.3390/mca28010014

Authors: Xilu Wang Yaochu Jin

Particle filters, also known as sequential Monte Carlo (SMC) methods, constitute a class of importance sampling and resampling techniques designed to use simulations to perform on-line filtering. Recently, particle filters have been extended for optimization by utilizing their ability to track a sequence of distributions. In this work, we incorporate transfer learning capabilities into the optimizer by using particle filters. To achieve this, we propose a novel particle-filter-based multi-objective optimization algorithm (PF-MOA) that transfers knowledge acquired from the search experience. The key insight adopted here is that, if we can construct a sequence of target distributions that balance the multiple objectives and make the degree of the balance controllable, we can approximate the Pareto optimal solutions by simulating each target distribution via particle filters. Since the importance weight updating step takes the previous target distribution as the proposal and the current target distribution as the target, the knowledge acquired from the previous run can be utilized in the current run by carefully designing the set of target distributions. The experimental results on the DTLZ and WFG test suites show that the proposed PF-MOA achieves competitive performance compared with state-of-the-art multi-objective evolutionary algorithms on most test instances.
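The "track a sequence of distributions" idea behind SMC-based optimization can be sketched on a single-objective toy problem: sample a sequence of increasingly peaked tempered targets pi_t(x) proportional to exp(-beta_t f(x)) with importance weighting, resampling, and jitter moves. This is a generic 1-D tempering sketch, not the PF-MOA algorithm itself; all numbers are illustrative:

```python
import numpy as np

def smc_minimize(f, n_particles=500, n_steps=30, seed=1):
    """Sequential Monte Carlo sketch for minimizing f: follow tempered
    targets pi_t(x) ~ exp(-beta_t * f(x)) with reweight/resample/move."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, n_particles)   # initial particle cloud
    for t in range(1, n_steps + 1):
        d_beta = 0.5                          # tempering increment per step
        w = np.exp(-d_beta * f(x))            # incremental importance weights
        w /= w.sum()
        idx = rng.choice(n_particles, n_particles, p=w)   # resample
        x = x[idx]
        x += rng.normal(0.0, 1.0 / t, n_particles)        # move (shrinking jitter)
    return x[np.argmin(f(x))]

best = smc_minimize(lambda x: x**2)           # minimum of x^2 is at 0
```

In PF-MOA the targets additionally encode a controllable trade-off among the objectives, so simulating each target approximates a different part of the Pareto set.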

]]>Mathematical and Computational Applications doi: 10.3390/mca28010013

Authors: Amar Debbouche Bhaskar Sundara Vadivoo Vladimir E. Fedorov Valery Antonov

We consider a class of nonlinear fractional differential systems with distributed time delays in the controls and impulse effects, and discuss controllability criteria for both the linear and nonlinear systems. The main results rely on a suitable Gramian matrix defined by the Mittag&ndash;Leffler function, using the standard Laplace transform and Schauder fixed-point techniques. Further, we provide an illustrative example supported by graphical representations to show the validity of the obtained abstract results.
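The Mittag-Leffler function that defines the Gramian plays, for fractional systems, the role the matrix exponential plays for integer-order ones. A scalar sketch via its truncated power series (the truncation length is a practical choice; it is capped so that the Gamma arguments stay within floating-point range):

```python
import math

def mittag_leffler(alpha, z, terms=80):
    """Truncated series for the one-parameter Mittag-Leffler function
    E_alpha(z) = sum_{k>=0} z^k / Gamma(alpha*k + 1).  For alpha = 1 it
    reduces to exp(z); for alpha = 2 and z >= 0 it equals cosh(sqrt(z))."""
    return sum(z**k / math.gamma(alpha * k + 1) for k in range(terms))

e1 = mittag_leffler(1.0, 1.0)   # should match exp(1)
e2 = mittag_leffler(2.0, 4.0)   # should match cosh(2)
```

For matrix arguments, as in the controllability Gramian, the same series is applied to a matrix, which is where dedicated numerical schemes become necessary.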

]]>Mathematical and Computational Applications doi: 10.3390/mca28010012

Authors: Santiago Sinisterra-Sierra Salvador Godoy-Calderón Miriam Pescador-Rojas

Association rule mining plays a crucial role in the medical area in discovering interesting relationships among the attributes of a data set. Traditional association rule mining algorithms such as Apriori, FP growth, or Eclat require considerable computational resources and generate large volumes of rules. Moreover, these techniques depend on user-defined thresholds which can inadvertently cause the algorithm to omit some interesting rules. In order to solve such challenges, we propose an evolutionary multi-objective algorithm based on NSGA-II to guide the mining process in a data set composed of 15.5 million records with official data describing the COVID-19 pandemic in Mexico. We tested different scenarios optimizing classical and causal estimation measures in four waves, defined as the periods of time where the number of people with COVID-19 increased. The proposed contributions generate, recombine, and evaluate patterns, focusing on recovering promising high-quality rules with actionable cause&ndash;effect relationships among the attributes to identify which groups are more susceptible to disease or what combinations of conditions are necessary to receive certain types of medical care.
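The rule-quality objectives such an evolutionary miner trades off can be computed directly from the data. A minimal sketch of the classical measures (support, confidence, lift) over item-set records; the items and records below are invented for illustration, not drawn from the COVID-19 data set:

```python
def rule_metrics(records, antecedent, consequent):
    """Support, confidence and lift of the association rule
    antecedent -> consequent over a list of item-sets."""
    n = len(records)
    a = sum(1 for r in records if antecedent <= r)                  # |A|
    ab = sum(1 for r in records if (antecedent | consequent) <= r)  # |A and C|
    c = sum(1 for r in records if consequent <= r)                  # |C|
    support = ab / n
    confidence = ab / a if a else 0.0
    lift = confidence / (c / n) if c else 0.0
    return support, confidence, lift

data = [{"male", "diabetes", "hospitalized"},
        {"female", "diabetes", "hospitalized"},
        {"male", "hospitalized"},
        {"female"}]
s, conf, lift = rule_metrics(data, {"diabetes"}, {"hospitalized"})
```

An NSGA-II-style miner treats several such measures as simultaneous objectives, so rules need not pass fixed user-defined thresholds to survive, which is exactly the limitation of Apriori-style algorithms noted above.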

]]>Mathematical and Computational Applications doi: 10.3390/mca28010011

Authors: Barry C. Arnold Bangalore G. Manjunath

It has been argued in Arnold and Manjunath (2021) that the bivariate pseudo-Poisson distribution will be the model of choice for bivariate data with one equidispersed marginal and the other marginal over-dispersed. This is due to its simple structure, straightforward parameter estimation and fast computation. In the current note, we introduce the effects of concomitant variables on the bivariate pseudo-Poisson parameters and explore the distributional and inferential aspects of the augmented models. We also include a small simulation study and an example of application to real-life data.

]]>Mathematical and Computational Applications doi: 10.3390/mca28010010

Authors: Hao Wang Michael Emmerich André Deutz Víctor Adrián Sosa Hernández Oliver Schütze

Recently, the Hypervolume Newton Method (HVN) has been proposed as a fast and precise indicator-based method for solving unconstrained bi-objective optimization problems with twice continuously differentiable objective functions. The HVN is defined on the space of (vectorized) fixed-cardinality sets of decision space vectors for a given multi-objective optimization problem (MOP) and seeks to maximize the hypervolume indicator by adopting the Newton&ndash;Raphson method for deterministic numerical optimization. To extend its scope to non-convex optimization problems, the HVN method was hybridized with a multi-objective evolutionary algorithm (MOEA), which resulted in a competitive solver for continuous unconstrained bi-objective optimization problems. In this paper, we extend the HVN to constrained MOPs with, in principle, any number of objectives. As in the original variant, the first- and second-order derivatives of the involved functions have to be given either analytically or numerically. We demonstrate the applicability of the extended HVN on a set of challenging benchmark problems and show that the new method can be readily applied to solve equality constraints with high precision and, to some extent, also inequalities. We finally use the HVN as a local search engine within an MOEA and show the benefit of this hybrid method on several benchmark problems.
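The hypervolume indicator that HVN maximizes has a simple closed form in the bi-objective case: sort the non-dominated points by the first objective and sum the rectangles they dominate up to a reference point. A minimal sketch for minimization problems (the point sets and reference point below are made up for illustration):

```python
def hypervolume_2d(points, ref):
    """Hypervolume dominated by a set of bi-objective (minimization)
    points relative to a reference point ref = (r1, r2): sweep the
    points in ascending f1 order and accumulate rectangle areas."""
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in sorted(points):
        if f2 < prev_f2:                     # skip dominated points
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

single = hypervolume_2d([(1.0, 1.0)], (2.0, 2.0))           # one unit square
front = hypervolume_2d([(1.0, 3.0), (2.0, 1.0)], (4.0, 4.0))
```

HVN goes further by differentiating this quantity with respect to the decision variables of the whole point set, enabling Newton steps on the set as a whole.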

]]>Mathematical and Computational Applications doi: 10.3390/mca28010009

Authors: Zakaria Yaagoub Karam Allali

A three-strain SEIR epidemic model with a vaccination strategy is suggested and studied in this work. This model is represented by a system of nine nonlinear ordinary differential equations that describe the interaction between susceptible individuals, strain-1-vaccinated individuals, strain-1-exposed individuals, strain-2-exposed individuals, strain-3-exposed individuals, strain-1-infected individuals, strain-2-infected individuals, strain-3-infected individuals, and recovered individuals. We start our analysis of this model by establishing the existence, positivity, and boundedness of all the solutions. The model has five equilibrium points: the first stands for the disease-free equilibrium, the second for the strain-1 endemic equilibrium, the third for the strain-2 endemic equilibrium, the fourth for the strain-3 endemic equilibrium, and the last is the total endemic equilibrium. We establish the global stability of each equilibrium point using a suitable Lyapunov function. This stability depends on the basic reproduction numbers of the three strains, R01, R02, and R03. Numerical simulations are given to confirm our theoretical results. It is shown that, in order to eradicate the infection, the basic reproduction numbers of all the strains must be less than unity.
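The role of the basic reproduction number can be seen already in a single-strain SEIR reduction of such a model. The sketch below integrates the standard S, E, I, R compartments with a classical Runge-Kutta scheme; the rates (beta, sigma, gamma) are illustrative, chosen so that R0 = beta/gamma = 5 > 1 and an epidemic occurs:

```python
import numpy as np

def seir_rhs(y, beta=0.5, sigma=0.2, gamma=0.1):
    """Single-strain SEIR right-hand side (no vaccination, one strain):
    S -> E at rate beta*S*I, E -> I at rate sigma, I -> R at rate gamma."""
    s, e, i, r = y
    return np.array([-beta * s * i,
                     beta * s * i - sigma * e,
                     sigma * e - gamma * i,
                     gamma * i])

def rk4(y0, t_end=400.0, dt=0.1):
    """Classical fourth-order Runge-Kutta integration of the SEIR system."""
    y = np.array(y0, dtype=float)
    for _ in range(int(t_end / dt)):
        k1 = seir_rhs(y)
        k2 = seir_rhs(y + 0.5 * dt * k1)
        k3 = seir_rhs(y + 0.5 * dt * k2)
        k4 = seir_rhs(y + dt * k3)
        y += (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return y

final = rk4([0.99, 0.0, 0.01, 0.0])   # S, E, I, R fractions
r0 = 0.5 / 0.1                        # beta/gamma = 5 > 1 => outbreak
```

Because the right-hand sides sum to zero, the total population fraction is conserved exactly, which is a useful numerical sanity check; with R0 > 1 the susceptible fraction is heavily depleted, consistent with the eradication condition in the abstract.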

]]>Mathematical and Computational Applications doi: 10.3390/mca28010008

Authors: Guilherme Duarte Ana Neves António Ramos Silva

Thermography techniques are gaining popularity for structural integrity monitoring and the analysis of mechanical systems&rsquo; behavior because they are contactless, non-intrusive, rapidly deployable, applicable to structures under harsh environments, and can be performed on-site. Moreover, the use of optical imaging techniques has grown rapidly over the past several decades due to progress in digital cameras, infrared cameras, and computational power. This work focuses on thermoelastic stress analysis (TSA), and its main goal was to create a computational model based on the finite element method that simulates this technique, in order to evaluate and quantify how changes in material properties, including orthotropic ones, affect the stresses obtained with TSA. The numerical simulations were performed for two samples: a compact specimen and a single-lap joint. When the developed numerical model was compared with previous laboratory tests, the results showed a good representation of the stress state for both samples. The created model is applicable to various materials, including fiber-reinforced composites. This work also highlights the need to perform laboratory tests using anisotropic materials to better understand the potential of TSA and improve the developed models.

]]>Mathematical and Computational Applications doi: 10.3390/mca28010007

Authors: Juan F. Giraldo Victor M. Calo

We construct a stabilized finite element method for linear and nonlinear unsteady advection&ndash;diffusion&ndash;reaction equations using the method of lines. We propose a residual minimization strategy that uses an ad hoc modified discrete system coupling a time-marching scheme with a semi-discrete discontinuous Galerkin formulation in space. This combination delivers a stable continuous solution and an on-the-fly error estimate that robustly guides adaptivity at every discrete time. We show the method&rsquo;s performance on advection-dominated problems to demonstrate stability in the solution and efficiency in the adaptivity strategy. We also demonstrate the method&rsquo;s robustness on the nonlinear Bratu equation in two dimensions.

]]>Mathematical and Computational Applications doi: 10.3390/mca28010006

Authors: Octavio Ramos-Figueroa Marcela Quiroz-Castellanos Efrén Mezura-Montes Nicandro Cruz-Ramírez

The Grouping Genetic Algorithm (GGA) is an extension of the standard Genetic Algorithm that uses a group-based representation scheme and variation operators that work at the group level. This metaheuristic is one of the most widely used to solve combinatorial optimization grouping problems. Its optimization process consists of different components, of which the crossover and mutation operators are the most recurrent. This article aims to highlight the impact that a well-designed operator can have on the final performance of a GGA. We present a comparative experimental study of different mutation operators for a GGA designed to solve the parallel-machine scheduling problem with unrelated machines and makespan minimization, which comprises scheduling a collection of jobs on a set of machines. The proposed approach focuses on identifying the strategies involved in the mutation operations and adapting them to the characteristics of the studied problem. As a result of this experimental study, knowledge of the problem domain was gained and used to design a new mutation operator called 2-Items Reinsertion. Experimental results indicate that the state-of-the-art GGA performance improves considerably when the original mutation operator is replaced with the new one, achieving better results, with an improvement rate of 52%.
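The objective being minimized is easy to state concretely: with unrelated machines, job j takes a machine-dependent time, and the makespan is the load of the busiest machine. The sketch below evaluates that objective and shows how reinserting jobs onto another machine (the spirit, not the exact definition, of the paper's 2-Items Reinsertion operator) can reduce it; the processing-time matrix is made up:

```python
def makespan(ptimes, assignment):
    """Makespan of an unrelated-parallel-machine schedule.
    ptimes[m][j] is job j's processing time on machine m;
    assignment[j] is the machine that runs job j."""
    loads = [0.0] * len(ptimes)
    for j, m in enumerate(assignment):
        loads[m] += ptimes[m][j]
    return max(loads)

# 2 machines, 3 jobs; times differ per machine (hence "unrelated")
p = [[2.0, 4.0, 6.0],
     [3.0, 2.0, 3.0]]
before = makespan(p, [0, 0, 0])   # all jobs on machine 0
after = makespan(p, [0, 1, 1])    # reinsert jobs 1 and 2 onto machine 1
```

A mutation operator in this setting is precisely a rule for choosing which jobs to remove and where to reinsert them; the quality of that rule is what the comparative study measures.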

]]>Mathematical and Computational Applications doi: 10.3390/mca28010005

Authors: Adel M. Al-Mahdi Mohammad M. Al-Gharabli Maher Noor Johnson D. Audu

In this paper, we study the long-time behavior of a weakly dissipative viscoelastic equation with variable exponent nonlinearity of the form $u_{tt}+\Delta^2 u-\int_0^t g(t-s)\Delta u(s)\,ds+a|u_t|^{n(\cdot)-2}u_t-\Delta u_t=0$, where $n(\cdot)$ is a continuous function satisfying some assumptions and $g$ is a general relaxation function such that $g'(t)\le-\xi(t)G(g(t))$, where $\xi$ and $G$ are functions satisfying some specific properties that will be mentioned in the paper. Depending on the nature of the decay rate of $g$ and the variable exponent $n(\cdot)$, we establish explicit and general decay results of the energy functional. We give some numerical illustrations to support our theoretical results. Our results improve some earlier works in the literature.

]]>Mathematical and Computational Applications doi: 10.3390/mca28010004

Authors: Ram Krishna Agbotiname Lucky Imoize Rajveer Singh Yaduvanshi Harendra Singh Arun Kumar Rana Subhendu Kumar Pani

The dielectric resonator antenna (DRA) can be modeled as a series and parallel combination of electrical networks consisting of a resistor (R), inductor (L), and capacitor (C) to address the peculiar challenges of antennas for emerging wireless communication systems in higher frequency ranges. In this paper, a multi-stacked DRA is proposed. The performance and characteristic features of the DRA are analyzed by deriving mathematical formulations for the dynamic impedance, input impedance, admittance, bandwidth, and quality factor for the fundamental and higher-order resonant modes. Specifically, the performance of the proposed multi-stacked DRA was analyzed in MATLAB and a high-frequency structure simulator (HFSS). The results indicate that high and low substrate permittivities tend to increase and decrease the quality factor, respectively. The impedance, radiation fields, and power flow are demonstrated using the proposed multi-stacked electrical network of R, L, and C components coupled with a suitable transformer. Overall, the proposed multi-stacked DRA network shows an improved quality factor and selectivity, while the bandwidth is reasonably reduced, making it useful for radio-frequency wireless communication systems. Additionally, to enhance the impedance bandwidth of the DRA, a multi-stacked DRA is constructed using ground-plane techniques with slots, a dual segment, and stacked DRA elements. The performance of the multi-stacked DRA is improved by 10% compared to existing models in terms of flexibility, moderate gain, compact size, bandwidth, quality factor, resonant frequency, impedance at the resonance frequency, and the radiation pattern in the terahertz frequency range.

]]>Mathematical and Computational Applications doi: 10.3390/mca28010003

Authors: B. Rushi Kumar R. Vijayakumar A. Jancy Rani

This work analyses the effect of electromagnetic fields on cartilaginous cells in human joints and on the nutrients that flow from the synovial fluid to the cartilage. The perturbation approach and the generalised dispersion model are used to solve the governing equations of momentum and mass transfer. The dispersion coefficient increases with dimensionless time, which aids in grasping the level of nutritional transport to the synovial joint. Low-molecular-weight solutes have a lower concentration distribution at the same depth in articular cartilage than high-molecular-weight solutes. Thus, diffusion dominates nutrition transport for low-molecular-weight solutes, whereas a mechanical pumping action dominates for high-molecular-weight solutes. The analysis shows that the cells in the centre of the cartilage surface receive more nutrients during imbibition and exudation than the cells on the periphery, and that the earliest indications of cartilage degradation emerge in the uninflected regions. As a result, cartilage nutrition is considered essential to joint mobility. It is also predicted that, as the viscoelastic parameter increases, the concentration in the articular cartilage diminishes, so the cartilage cells receive less nutrition, which might lead to harmful effects. The dispersion coefficient and mean concentration for distinct factors, such as the Hartmann number, the porous parameter, and the viscoelastic parameters of gel formation, have been computed and illustrated through graphics.

]]>Mathematical and Computational Applications doi: 10.3390/mca28010002

Authors: Vinodh Srinivasa Reddy Jagan Kandasamy Sivasankaran Sivanandam

The current study used a novel Casson model to investigate hybrid Al2O3-Cu/ethylene glycol nanofluid flow over a moving thin needle under MHD, Dufour&ndash;Soret effects, and thermal radiation. By utilizing the appropriate transformations, the governing partial differential equations are transformed into ordinary differential equations, which are solved analytically using HAM. Furthermore, we discuss the velocity, temperature, and concentration profiles for various values of the governing parameters. The skin friction coefficient increases by up to 45% as the Casson parameter is raised by up to 20%, and the heat transfer rate also increases with the inclusion of nanoparticles. Additionally, the local skin friction, local Nusselt number, and local Sherwood number are examined for many parameters in this article.

]]>Mathematical and Computational Applications doi: 10.3390/mca28010001

Authors: Xiaofu Ji Xueqing Yan

The problem of finite-time static output feedback H&infin; control for a class of discrete-time singular Markov jump systems is studied in this paper. Taking network transmission delay and event-triggered schemes into consideration, a closed-loop model of a discrete-time singular Markov jump system is established under the static output feedback control law, and a corresponding sufficient condition is given to guarantee that this system is regular, causal, and finite-time bounded and satisfies the given H&infin; performance. Based on a matrix decomposition algorithm, the output feedback controller is reduced to a feasible solution of a set of strict matrix inequalities. A numerical example is presented to show the effectiveness of the presented method.

]]>Mathematical and Computational Applications doi: 10.3390/mca27060112

Authors: Ankur Sinha Jyrki Wallenius

Most of the practical applications that require optimization often involve multiple objectives. These objectives, when conflicting in nature, pose both optimization as well as decision-making challenges. An optimization procedure for such a multi-objective problem requires computing (computer-based search) and decision making to identify the most preferred solution. Researchers and practitioners working in various domains have integrated computing and decision-making tasks in several ways, giving rise to a variety of algorithms to handle multi-objective optimization problems. For instance, an a priori approach requires formulating (or eliciting) a decision maker&rsquo;s value function and then performing a one-shot optimization of the value function, whereas an a posteriori decision-making approach requires a large number of diverse Pareto-optimal solutions to be available before a final decision is made. Alternatively, an interactive approach involves interactions with the decision maker to guide the search towards better solutions (or the most preferred solution). In our tutorial and survey paper, we first review the fundamental concepts of multi-objective optimization. Second, we discuss the classic interactive approaches from the field of Multi-Criteria Decision Making (MCDM), followed by the underlying idea and methods in the field of Evolutionary Multi-Objective Optimization (EMO). Third, we consider several promising MCDM and EMO hybrid approaches that aim to capitalize on the strengths of the two domains. We conclude with discussions on important behavioral considerations related to the use of such approaches and future work.

]]>Mathematical and Computational Applications doi: 10.3390/mca27060110

Authors: Manoj Kumar Narayanaswamy Jagan Kandasamy Sivasankaran Sivanandam

The impacts of Stefan blowing, slip, and Joule heating on hybrid nanofluid (HNF) flow past a shrinking cylinder are investigated in the presence of thermal radiation. Using suitable transformations, the governing equations are converted into ODEs, and the MATLAB tool bvp4c is used to solve the resulting equations. As Stefan blowing increases, the temperature and concentration profiles are accelerated while the velocity profile diminishes; the heat transfer rate improves by up to 25% as thermal radiation increases, whereas the mass transfer rate diminishes with increasing Stefan blowing. The Sherwood number, the Nusselt number, and the skin friction coefficient are tabulated, and graphs are also plotted. The outcomes are thoroughly discussed.

]]>Mathematical and Computational Applications doi: 10.3390/mca27060109

Authors: Xiaoqing Zhao Lei Pang Lianming Wang Sen Men Lei Yan

This paper aimed to combine hyperspectral imaging (378&ndash;1042 nm) and a deep convolutional neural network (DCNN) to rapidly and non-destructively detect and predict the viability of waxy corn seeds. Different viability levels were set by artificial aging (aging: 0 d, 3 d, 6 d, and 9 d), and spectral data for the first 10 h of seed germination were continuously collected. Bands that were significantly correlated (SC) with the moisture, protein, starch, and fat content of the seeds were selected, and another optimal combination was extracted using a successive projection algorithm (SPA). The support vector machine (SVM), k-nearest neighbor (KNN), random forest (RF), and DCNN approaches were used to establish the viability detection and prediction models. During detection, as different aging levels were added, the recognition performance of the first three methods decreased, while that of the DCNN method remained relatively stable (always above 95%). When the data from the first 2.5 h were used, the prediction accuracy was generally higher than that of the detection model. Among them, SVM + full band increased the most, while DCNN + full band was the highest, reaching 98.83% accuracy. These results indicate that the combined use of hyperspectral imaging technology and the DCNN method is more conducive to the rapid detection and prediction of seed viability.

]]>Mathematical and Computational Applications doi: 10.3390/mca27060111

Authors: Alexia Yavicoli

In this article, we introduce a notion of size for sets, called the thickness, that can be used to guarantee that two Cantor sets intersect (the Gap Lemma) and show a connection among thickness, Schmidt games and patterns. We work mostly in the real line, but we also introduce the topic in higher dimensions.

]]>Mathematical and Computational Applications doi: 10.3390/mca27060108

Authors: Johan Cillie Corné Coetzee

Finite element analysis (FEA) has been proven a useful design tool for modelling corrugated paperboard boxes and is capable of accurately predicting load capacity. The in-plane deformation, however, is usually significantly underpredicted. To investigate this discrepancy, a panel compression test jig implementing simply supported boundary conditions was built to test individual panels. The panels were then modelled using non-linear FEA with a linear material model. The results show that the in-plane deformation was still underpredicted, but a general improvement was seen. Three discrepancies were identified. First, the panels showed an initial region of low stiffness that was not present in the FEA results; this was attributed to imperfections in the panels and the jig. Second, the experiments reported a lower stiffness than the FEA; applying an initial imperfection in the shape of the first buckling mode was found to reduce the FEA stiffness. Third, the panels showed a decrease in stiffness near failure, which was not seen in the FEA; a bi-linear material model was investigated and holds the potential to improve the results. Box compression tests were performed on a Regular Slotted Container (RSC) with the same dimensions as the tested panel. The box displaced 13.1 mm compared to 3.5 mm for the panel, with an initial region of low stiffness accounting for 7 mm of displacement compared to 0.5 mm for the panels. Thus, box complexities such as horizontal creases should be included in finite element (FE) models to accurately predict the in-plane deformation, while a bi-linear (or other non-linear) material model may be useful for panel compression.

]]>Mathematical and Computational Applications doi: 10.3390/mca27060107

Authors: Maria Amélia R. Loja

This is the Special Issue &ldquo;Numerical and Symbolic Computation: Developments and Applications&mdash;2021&rdquo;, also available at the Special Issue website https://www [...]

]]>Mathematical and Computational Applications doi: 10.3390/mca27060106

Authors: Henrik Smedberg Carlos Alberto Barrera-Diaz Amir Nourmohammadi Sunith Bandaru Amos H. C. Ng

Current market requirements force manufacturing companies to face production changes more often than ever before. Reconfigurable manufacturing systems (RMS) are considered a key enabler in today&rsquo;s manufacturing industry to cope with such dynamic and volatile markets. The literature confirms that the use of simulation-based multi-objective optimization offers a promising approach that leads to improvements in RMS. However, due to the dynamic behavior of real-world RMS, applying conventional optimization approaches can be very time-consuming, specifically when there is no general knowledge about the quality of solutions. Meanwhile, Pareto-optimal solutions may share some common design principles that can be discovered with data mining and machine learning methods and exploited by the optimization. In this study, the authors investigate a novel knowledge-driven optimization (KDO) approach to speed up the convergence in RMS applications. This approach generates generalized knowledge from previous scenarios, which is then applied to improve the efficiency of the optimization of new scenarios. This study applied the proposed approach to a multi-part flow line RMS that considers scalable capacities while addressing the tasks assignment to workstations and the buffer allocation problems. The results demonstrate how a KDO approach leads to convergence rate improvements in a real-world RMS case.

Mathematical and Computational Applications doi: 10.3390/mca27060105

Authors: Suleman Nasiru Abdul Ghaniyyu Abubakari Christophe Chesneau

Probability distributions are very useful in modeling lifetime datasets. However, no specific distribution is suitable for all kinds of datasets. In this study, the bounded truncated Cauchy power exponential distribution is proposed for modeling datasets on the unit interval. The probability density function exhibits desirable shapes, such as left-skewed, right-skewed, reversed J, and bathtub shapes, whereas the hazard rate function displays J and bathtub shapes. For the purpose of modeling dependence between measures in a dataset, a bivariate extension of the proposed distribution is developed. The bivariate probability density function displays monotonic and non-monotonic shapes, making it suitable for modeling complex bivariate relations. Subsequently, the applications of the distribution are illustrated using COVID-19 data. The results revealed that the new distribution provides a better fit to the datasets compared to other existing distributions. Finally, a new quantile regression model is developed and its application demonstrated. The generated quantile regression model offers a decent fit to the data, according to the residual analysis.

Mathematical and Computational Applications doi: 10.3390/mca27060104

Authors: Abdisalam Hassan Muse Christophe Chesneau Oscar Ngesa Samuel Mwalili

This study aims to propose a flexible, fully parametric hazard-based regression model for censored time-to-event data with crossing survival curves. We call it the accelerated hazard (AH) model. The AH model can be written with or without a baseline distribution for lifetimes. The former assumption results in parametric regression models, whereas the latter results in semi-parametric regression models, which are by far the most commonly used in time-to-event analysis. However, under certain conditions, a parametric hazard-based regression model may produce more efficient estimates than a semi-parametric model. The parametric AH model, on the other hand, is inappropriate when the baseline distribution is exponential because its hazard rate is constant over time; similarly, when the baseline distribution is the Weibull distribution, the AH model coincides with the accelerated failure time (AFT) and proportional hazard (PH) models. The use of a versatile parametric baseline distribution (generalized log-logistic distribution) for modeling the baseline hazard rate function is investigated. For the parameters of the proposed AH model, the classical (via maximum likelihood estimation) and Bayesian approaches using noninformative priors are discussed. A comprehensive simulation study was conducted to assess the performance of the proposed model&rsquo;s estimators. A real-life right-censored gastric cancer dataset with crossover survival curves is used to demonstrate the tractability and utility of the proposed fully parametric AH model. The study concluded that the parametric AH model is effective and could be useful for assessing a variety of survival data types with crossover survival curves.

Mathematical and Computational Applications doi: 10.3390/mca27060103

Authors: Antonio J. Nebro Jesús Galeano-Brajones Francisco Luna Carlos A. Coello Coello

NSGA-II is, by far, the most popular metaheuristic that has been adopted for solving multi-objective optimization problems. However, its most common usage, particularly when dealing with continuous problems, is circumscribed to a standard algorithmic configuration similar to the one described in its seminal paper. In this work, our aim is to show that the performance of NSGA-II, when properly configured, can be significantly improved in the context of large-scale optimization. Our approach leverages irace, a tool for automated algorithmic configuration, and a highly configurable version of NSGA-II available in the jMetal framework. Two scenarios are devised: first, solving the Zitzler&ndash;Deb&ndash;Thiele (ZDT) test problems, and second, dealing with a binary real-world problem of the telecommunications domain. Our experiments reveal that an auto-configured version of NSGA-II can properly address test problems ZDT1 and ZDT2 with up to 2^17 = 131,072 decision variables. The same methodology, when applied to the telecommunications problem, shows that significant improvements can be obtained with respect to the original NSGA-II algorithm when solving problems with thousands of bits.
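The Pareto machinery underlying NSGA-II can be illustrated with a minimal non-dominated front extraction for bi-objective minimization. The points are made up, and this is not the jMetal/irace pipeline used in the study.

```python
# Sketch of the non-dominated sorting idea at the core of NSGA-II, shown
# here only for extracting the first front of a bi-objective minimization
# problem. The points are illustrative, not a ZDT instance.

def dominates(a, b):
    """True if solution a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def first_front(points):
    """Indices of the non-dominated (first) Pareto front."""
    return [i for i, p in enumerate(points)
            if not any(dominates(q, p) for j, q in enumerate(points) if j != i)]

pts = [(1.0, 5.0), (2.0, 3.0), (4.0, 1.0), (3.0, 4.0), (5.0, 5.0)]
print(first_front(pts))  # → [0, 1, 2]
```

The full algorithm repeats this ranking over successive fronts and adds crowding-distance selection; auto-configuration then tunes operators and parameters around this core.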

Mathematical and Computational Applications doi: 10.3390/mca27060102

Authors: Ramesh Kune Hari Singh Naik Borra Shashidar Reddy Christophe Chesneau

The study is devoted to investigating the effect of an unsteady non-Newtonian Casson fluid over a vertical plate. A mathematical analysis is presented for a Casson fluid by taking into consideration Soret and Dufour effects, heat generation, heat radiation, and chemical reaction. The novelty of the problem is the physical interpretation of the Casson fluid before and after adding copper water-based nanoparticles to the governing flow. It is found that the velocity decreased and the temperature profile was enhanced. A similarity transformation is used to convert the linked partial differential equations that control flow into non-linear coupled ordinary differential equations. The momentum, energy, and concentration formulations are solved by means of the finite element method. The thermal and solute layer thickness growth is due to the nanoparticles&rsquo; thermo-diffusion. The effects of relevant parameters such as the Casson fluid parameter, radiation, Soret and Dufour effects, chemical reaction, and Prandtl number are discussed. A correlation of the average Nusselt number and Sherwood number corresponding to the active parameters is presented. It can be noticed that increasing the Dufour number leads to an uplift in heat transfer. Fluid velocity increases with the Grashof number and decreases with the magnetic effect. The impact of heat sources and radiation is to increase the thermal conductivity. Concentration decreases with the Schmidt number.

Mathematical and Computational Applications doi: 10.3390/mca27060101

Authors: Gang Chen Lihua Wei Jiangyue Fu Chengjiang Li Gang Zhao

In recent years, the consensus-reaching process of large group decision making has attracted much attention in the research community, especially in the area of emergency management. However, the decision information is always limited and inaccurate. The trust relationship among decision makers has been proven to exert important impacts on group consensus. In this study, we proposed a novel uncertain linguistic cloud similarity method based on trust update and the opinion interaction mechanism. Firstly, we transformed the linguistic preferences into clouds and used cloud similarity to divide large-scale decision makers into several groups. Secondly, an improved PageRank algorithm based on the trust relationship was developed to calculate the weights of decision makers. A combined weighting method considering the similarity and group size was also presented to calculate the weights of groups. Thirdly, a trust updating mechanism based on cloud similarity, consensus level, and cooperation willingness was developed to speed up the consensus-reaching process, and an opinion interaction mechanism was constructed to measure the consensus level of decision makers. Finally, a numerical experiment effectively illustrated the feasibility of the proposed method. The proposed method was proven to maximally retain the randomness and fuzziness of the decision information during a consensus-reaching process with fast convergent speed and good practicality.
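The PageRank step can be sketched with standard power iteration over a toy trust digraph; the study uses an improved variant, and the trust relations below are purely illustrative.

```python
# Minimal power-iteration PageRank over a hypothetical trust digraph, as a
# sketch of how trust relationships can yield decision-maker weights.
# This is the standard algorithm, not the paper's improved variant.

def pagerank(adj, d=0.85, iters=100):
    """adj[i] lists the decision makers trusted by decision maker i."""
    n = len(adj)
    rank = [1.0 / n] * n
    for _ in range(iters):
        new = [(1.0 - d) / n] * n          # teleportation term
        for i, outs in enumerate(adj):
            if outs:
                share = d * rank[i] / len(outs)
                for j in outs:
                    new[j] += share
            else:                           # dangling node: spread uniformly
                for j in range(n):
                    new[j] += d * rank[i] / n
        rank = new
    return rank

trust = [[1, 2], [2], [0], [0, 2]]          # who trusts whom (hypothetical)
weights = pagerank(trust)
print([round(w, 3) for w in weights])       # normalized trust-based weights
```

Decision maker 3, whom nobody trusts, ends up with the smallest weight, which is exactly the behavior one wants from a trust-based weighting.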

Mathematical and Computational Applications doi: 10.3390/mca27060100

Authors: Houari Mechkour

In this article, we are interested in the behavior of a three-dimensional model of a periodic perforated piezoelectric plate, when the thickness h of the plate and the size &epsilon; of the holes are small. We study the dependence of the displacements and electric potential on h and &epsilon;, and give equivalent limits when h and &epsilon; tend towards zero. We compute analytical formulae for all effective properties of the periodic perforated piezoelectric plate.

Mathematical and Computational Applications doi: 10.3390/mca27060099

Authors: Kiran Pannerselvam Deepanshu Yadav Palaniappan Ramu

Importance sampling is a variance reduction technique that is used to improve the efficiency of Monte Carlo estimation. Importance sampling uses the trick of sampling from a distribution, which is located around the zone of interest of the primary distribution, thereby reducing the number of realizations required for an estimate. In the context of reliability-based structural design, the limit state is usually separable and is of the form Capacity (C)&ndash;Response (R). The zone of interest for importance sampling is observed to be the region where these distributions overlap each other. However, often the distribution information of C and R themselves is not known, and one has only scarce realizations of them. In this work, we propose approximating the probability density function and the cumulative distribution function using kernel functions and employ these approximations to find the parameters of the importance sampling density (ISD) to eventually estimate the reliability. In the proposed approach, in addition to the ISD parameters, the approximations also play a critical role in the accuracy of the probability estimates. We assume an ISD that follows a normal distribution whose mean is defined by the most probable point (MPP) of failure, and whose standard deviation is empirically chosen such that most of the importance sample realizations lie within the means of R and C. Since the probability estimate depends on the approximation, which in turn depends on the underlying samples, we use bootstrap to quantify the variation associated with the low failure probability estimate. The method is investigated with different tailed distributions of R and C. Based on the observations, a modified Hill estimator is utilized to address scenarios with heavy-tailed distributions where the distribution approximations perform poorly. The proposed approach is tested on benchmark reliability examples and, along with surrogate modeling techniques, is implemented on four reliability-based design optimization examples, of which one is a multi-objective optimization problem.
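A minimal sketch of importance sampling for a separable limit state g = C - R, assuming illustrative normal distributions for C and R and a normal ISD centered in their overlap region (the kernel approximations, MPP search, and bootstrap step of the proposed approach are not reproduced here):

```python
import math, random

# Importance sampling sketch for the failure event C - R < 0, with an ISD
# centered in the C/R overlap zone. Distributions and parameters are
# illustrative, standing in for the MPP-centered normal ISD of the paper.

def npdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def failure_probability(n=20000, seed=1):
    rng = random.Random(seed)
    mu_c, mu_r, sd = 10.0, 6.0, 1.0       # capacity and response (assumed)
    mu_is = 0.5 * (mu_c + mu_r)           # ISD mean in the overlap zone
    total = 0.0
    for _ in range(n):
        c = rng.gauss(mu_is, sd)          # sample C from the ISD
        r = rng.gauss(mu_is, sd)          # sample R from the ISD
        if c - r < 0.0:                   # failure: response exceeds capacity
            # likelihood ratio weight: target density / sampling density
            total += (npdf(c, mu_c, sd) * npdf(r, mu_r, sd)) / (
                      npdf(c, mu_is, sd) * npdf(r, mu_is, sd))
    return total / n

print(failure_probability())  # exact value here is Phi(-4/sqrt(2)) ≈ 2.3e-3
```

Because the ISD concentrates samples where C and R overlap, the rare failure region is hit often and the weights stay bounded, giving a far lower-variance estimate than crude Monte Carlo at this sample size.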

Mathematical and Computational Applications doi: 10.3390/mca27060098

Authors: Kandasamy Jagan Sivanandam Sivasankaran

The objective of this paper is to investigate the 3D non-linearly thermally radiated flow of a Jeffrey nanofluid towards a stretchy surface with the Cattaneo&ndash;Christov heat flux (CCHF) model in the presence of a convective boundary condition. The Homotopy Analysis Method (HAM) is used to solve the ordinary differential equations that are obtained by reforming the governing equations using suitable transformations. The equations obtained from HAM are plotted graphically for different parameters. In addition, the skin-friction coefficient, local Nusselt number, and Sherwood number for various parameters are calculated and discussed. The velocity profiles along the x- and y-directions decrease with a rise in the ratio of relaxation to retardation times. The concentration and temperature profiles rise while magnifying the ratio of relaxation to retardation times. While raising the ratio parameter, the x-direction velocity, temperature, and concentration profiles diminish, whereas the y-direction velocity profile magnifies. Magnifying the Deborah number results in a rise in the velocity profiles along the x- and y-directions, and a decline in the temperature and concentration profiles.

Mathematical and Computational Applications doi: 10.3390/mca27060097

Authors: Himani Sharma Munish Kansal Ramandeep Behl

We propose a new iterative scheme without memory for solving nonlinear equations. The proposed scheme is based on a cubically convergent Hansen&ndash;Patrick-type method. The beauty of our techniques is that they work even when the derivative is very small in the vicinity of the required root, or f&prime;(x)=0. On the contrary, the previous modifications either diverge or fail to work. In addition, we also extend the same idea to an iterative method with memory. Numerical examples and comparisons with some of the existing methods are included to confirm the theoretical results. Furthermore, basins of attraction are included to give a clear picture of the convergence of the proposed method as well as that of some of the existing methods. Numerical experiments are performed on engineering problems, such as fractional conversion in a chemical reactor, Planck&rsquo;s radiation law problem, Van der Waal&rsquo;s problem, and the trajectory of an electron between two parallel plates. The numerical results reveal that the proposed schemes are well suited to various real-life problems. Basins of attraction also support this aspect.
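For orientation, one common form of the Hansen-Patrick family of cubically convergent iterations can be sketched as follows. This is the textbook baseline with a family parameter alpha, not the authors' derivative-robust modifications, and it assumes the discriminant under the square root stays positive.

```python
import math

# Sketch of one common form of the Hansen-Patrick family:
#   x_{k+1} = x_k - (a+1) f / (a f' + sqrt(f'^2 - (a+1) f f''))
# with family parameter a = alpha. This baseline is NOT robust to f'(x) ~ 0,
# which is precisely the case the paper's modifications address.

def hansen_patrick(f, df, d2f, x, alpha=1.0, tol=1e-12, max_iter=50):
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        dfx, d2fx = df(x), d2f(x)
        disc = dfx * dfx - (alpha + 1.0) * fx * d2fx   # assumed positive here
        x -= (alpha + 1.0) * fx / (alpha * dfx + math.sqrt(disc))
    return x

# Cube root of 2 as the root of f(x) = x^3 - 2, starting from x0 = 1.5
root = hansen_patrick(lambda x: x**3 - 2, lambda x: 3*x**2, lambda x: 6*x, 1.5)
print(root)  # ≈ 1.259921
```

Cubic convergence means the number of correct digits roughly triples per step, so a handful of iterations reaches machine precision from a reasonable start.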

Mathematical and Computational Applications doi: 10.3390/mca27060096

Authors: Xiangdong Liu Yan Bai Cunhui Yu Hailong Yang Haoning Gao Jing Wang Qing Chang Xiaodong Wen

The sparrow search algorithm (SSA) is a metaheuristic algorithm developed based on the foraging and anti-predatory behavior of sparrow populations. Like other metaheuristic algorithms, SSA suffers from poor population diversity, weak global comprehensive search ability, and a tendency to fall into local optima. To address the problems whereby the sparrow search algorithm tends to fall into local optima and the population diversity decreases in the later stage of the search, an improved sparrow search algorithm (PGL-SSA) based on piecewise chaotic mapping, Gaussian difference variation, and linear differential decreasing inertia weight fusion is proposed. Firstly, we analyze the improvement of six chaotic mappings on the overall performance of the sparrow search algorithm, and we finally determine the initialization of the population by piecewise chaotic mapping to increase the initial population richness and improve the initial solution quality. Secondly, we introduce Gaussian difference variation in the process of individual iterative update and use it to perturb the individuals and generate diversity, so that the algorithm can converge quickly and avoid falling into local optima. Finally, linear differential decreasing inertia weights are introduced globally to adjust the weights, so that the algorithm can fully traverse the solution space with larger weights in the early iterations to avoid falling into local optima, and enhance the local search ability with smaller weights in later iterations to improve the search accuracy of the optimal solution. The results show that the proposed algorithm has a faster convergence speed and higher search accuracy than the comparison algorithms, the global search capability is significantly enhanced, and it is easier to jump out of local optima. The improved algorithm is also applied to Heating, Ventilation and Air Conditioning (HVAC) system control optimization, where it is used to tune the parameters of the HVAC system&rsquo;s Proportional Integral Derivative (PID) controller. The results show that the PID controller optimized by the improved algorithm has higher control accuracy and system stability, which verifies the feasibility of the improved algorithm in practical engineering applications.
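The chaotic population initialization can be sketched with the piecewise linear chaotic map (PWLCM), one common "piecewise" map. The map parameter, seed, population size, and search bounds below are illustrative, not the exact PGL-SSA settings.

```python
# Population initialization via the piecewise linear chaotic map (PWLCM).
# Successive map iterates on (0, 1) are rescaled into the search bounds,
# spreading the initial population more evenly than naive random seeding.
# Map parameter p, seed x0, and bounds are illustrative assumptions.

def pwlcm(x, p=0.4):
    """One iterate of the piecewise linear chaotic map on (0, 1)."""
    if x >= 0.5:
        x = 1.0 - x          # the map is symmetric about x = 0.5
    if x < p:
        return x / p
    return (x - p) / (0.5 - p)

def init_population(pop_size, dim, lower, upper, x0=0.123):
    pop, x = [], x0
    for _ in range(pop_size):
        row = []
        for _ in range(dim):
            x = pwlcm(x)
            row.append(lower + (upper - lower) * x)  # rescale to bounds
        pop.append(row)
    return pop

pop = init_population(5, 3, -10.0, 10.0)
print(pop[0])
```

Each coordinate is the next iterate of the same chaotic orbit, so the initial positions cover the box without the clustering that a poor pseudo-random seed can produce.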

Mathematical and Computational Applications doi: 10.3390/mca27060095

Authors: Alexander Robitzsch

Guessing effects frequently occur in testing data in educational or psychological applications. Different item response models have been proposed to handle guessing effects in dichotomous test items. However, it has been pointed out in the literature that the often-employed three-parameter logistic model poses implausible assumptions regarding the guessing process. The four-parameter guessing model has been proposed as an alternative to circumvent these conceptual issues. In this article, the four-parameter guessing model is compared with alternative item response models for handling guessing effects through a simulation study and an empirical example. It turns out that model selection for item response models should rather be based on the AIC than on the BIC. However, the RMSD item fit statistic used with typical cutoff values was found to be ineffective in detecting misspecified item response models. Furthermore, sufficiently large sample sizes are required for precise item parameter estimation. Moreover, it is argued that statistical model fit should not be the sole criterion of model choice. The item response model used in operational practice should be valid with respect to the meaning of the ability variable and the underlying model assumptions. In this sense, the four-parameter guessing model could be the model of choice in educational large-scale assessment studies.
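The AIC/BIC comparison reduces to two formulas, AIC = 2k - 2 ln L and BIC = k ln n - 2 ln L. The log-likelihoods below are made-up numbers purely to show how the two criteria can disagree.

```python
import math

# Information criteria for model selection; lower is better.
# k = number of free parameters, n = sample size, log_lik = maximized ln L.
# The fits below are hypothetical, not results from the study.

def aic(log_lik, k):
    return 2 * k - 2 * log_lik

def bic(log_lik, k, n):
    return k * math.log(n) - 2 * log_lik

# Hypothetical fits on n = 1000 persons: a 30-parameter model vs. a
# richer 40-parameter guessing model with a higher log-likelihood.
print(aic(-5120.0, 30), bic(-5120.0, 30, 1000))   # smaller model
print(aic(-5100.0, 40), bic(-5100.0, 40, 1000))   # larger model
```

With these illustrative numbers, AIC favors the larger model while BIC's heavier ln(n) penalty favors the smaller one, which is exactly the kind of disagreement the study adjudicates in favor of the AIC.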

Mathematical and Computational Applications doi: 10.3390/mca27060094

Authors: Siyuan Xing Jian-Qiao Sun

In this paper, we study the multi-objective optimization of the viscous boundary condition of an elastic rod using a hybrid method combining a genetic algorithm and simple cell mapping (GA-SCM). The method proceeds with the NSGA-II algorithm to seek a rough Pareto set, followed by a local recovery process based on one-step simple cell mapping to complete the branch of the Pareto set. To accelerate computation, the rod response under impulsive loading is calculated with a particular solution method that provides accurate structural responses with less computational effort. The Pareto set and Pareto front of a case study are obtained with the GA-SCM hybrid method. Optimal designs of each objective function are illustrated through numerical simulations.

Mathematical and Computational Applications doi: 10.3390/mca27060093

Authors: Jaouad Danane

In this work, we study a capital&ndash;labor model by considering the interaction between the newly proposed and the confirmed free jobs, the precariat labor force, and the mature labor force, introducing Brownian motion and L&eacute;vy noise. Moreover, we illustrate the well-posedness of the solution. In addition, we establish the conditions for the extinction of both the free jobs and the labor force; subsequently, we prove the persistence of only the free jobs, and we also show the conditions for the persistence of both the free jobs and the labor force. Finally, we validate our theoretical findings through numerical simulations based on a new stochastic Runge&ndash;Kutta method.
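As a simpler baseline than the stochastic Runge-Kutta scheme built in the paper, the numerical treatment of such SDEs can be sketched with Euler-Maruyama on a logistic-type equation with multiplicative Brownian noise. The drift, the parameter values, and the omission of Lévy jumps are all simplifying assumptions.

```python
import math, random

# Euler-Maruyama for dX = r X (1 - X/K) dt + s X dW: a logistic-type SDE
# standing in for the capital-labor dynamics (jump/Levy terms omitted).
# All parameter values are illustrative.

def euler_maruyama(x0, r, K, s, T=10.0, n=1000, seed=7):
    rng = random.Random(seed)
    dt = T / n
    x = x0
    for _ in range(n):
        dW = rng.gauss(0.0, math.sqrt(dt))            # Brownian increment
        x += r * x * (1.0 - x / K) * dt + s * x * dW  # drift + diffusion
        x = max(x, 0.0)                               # population stays >= 0
    return x

print(euler_maruyama(1.0, 0.8, 10.0, 0.2))  # endpoint of one sample path
```

Extinction versus persistence in such models shows up numerically as sample paths collapsing toward zero or fluctuating around the carrying capacity K, depending on the noise intensity s relative to the growth rate r.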

Mathematical and Computational Applications doi: 10.3390/mca27060092

Authors: Gabriel Thomaz de Aquino Pereira Ricardo J. Alves de Sousa I-Shih Liu Marcello Goulart Teixeira Fábio A. O. Fernandes

It is increasingly necessary to promote means of production that are less polluting and less harmful to the environment, following the UN 2030 Agenda for Sustainable Development. Using natural cellular materials in structural applications can be essential for enabling a future in this direction. Cork is a natural cellular material with an excellent energy absorption capacity. Its use in engineering applications and products has grown over time, so predicting its mechanical response through numerical tools is crucial. Classical cork modeling uses a model developed for foam materials, including an adjustment function that does not have a clear physical interpretation. This work presents a new material model for agglomerated cork, based solely on well-known hypotheses of continuum mechanics and using fewer parameters than the classical model, together with a finite element framework to validate the new model against experimental data.

Mathematical and Computational Applications doi: 10.3390/mca27060091

Authors: Manoj Kumar Narayanaswamy Jagan Kandasamy Sivasankaran Sivanandam

The focal interest in this article is to investigate the Stefan blowing and Dufour and Soret effects on hybrid nanofluid (HNF) flow towards a stretching cylinder with thermal radiation. The governing equations are converted into ODEs by using suitable transformations. The boundary value problem solver (bvp4c), a package in MATLAB, is used to solve the resulting ODEs. Results show that a rise in Stefan blowing enhances the velocity, temperature, and concentration profiles. The heat transfer rate increases by up to 10% in the presence of the 4% nanoparticle HNF, but the mass transfer rate diminishes. Additionally, the skin friction coefficient, Nusselt number, and Sherwood number are examined for the many parameters entangled in this article, and the results are discussed in detail.

Mathematical and Computational Applications doi: 10.3390/mca27060090

Authors: Carlos Aurelio Andreucci Elza M. M. Fonseca Renato N. Jorge

A new mechanism, applied in this study as a biomechanical device, known as a Bioactive Kinetic Screw (BKS) for bone implants, is described. The BKS was designed as a bone implant in which the bone particles, blood, cells, and protein molecules removed during bone drilling are used as a homogeneous autogenous transplant at the same implant site, aiming to optimize the healing process and simplify the surgical procedure. In this work, the amount of bone that will be compacted inside and around the new biomechanism was studied, based on the density of the bone applied. This study allows us to compare the average bone density in humans (1.85 mg/mm&sup3; or 1850 &micro;g/mm&sup3;) with four different synthetic bone densities (Sawbones PCF 10, 20, 30 and 40). The results show that across all four synthetic bone densities, the bone within the new model is 3.45 times denser. After a pilot drill (with 10 mm length and 1.8 mm diameter), in cases where a guide hole is required, the resulting ratio is equal to 2.7 times inside and around the new biomechanism. The in vitro test validated the mathematical results, showing that in two different materials the same compaction factor of 3.45 was obtained with the new biomechanical device. It was shown that the BKS can become a powerful tool in the diagnosis and treatment of natural bone conditions and any type of disease.

Mathematical and Computational Applications doi: 10.3390/mca27060089

Authors: Himanshukumar R. Patel Vipul A. Shah

In recent years, various metaheuristic algorithms have shown significant results in control engineering problems; moreover, fuzzy sets (FSs) and theories have frequently been used for dynamic parameter adaptation in metaheuristic algorithms. The primary reason for this is that fuzzy inference systems (FISs) can be designed using human knowledge, allowing for intelligent dynamic adaptation of metaheuristic parameters. To accomplish these tasks, we proposed shadowed type-2 fuzzy inference systems (ST2FISs) for two metaheuristic algorithms, namely cuckoo search (CS) and flower pollination (FP). Furthermore, with the advent of shadowed type-2 fuzzy logic, its uncertainty handling abilities offer appealing performance improvements for dynamic parameter adaptation in metaheuristic methods; moreover, the use of ST2FISs has been shown in recent works to provide better results than type-1 fuzzy inference systems (T1FISs). As a result, ST2FISs are proposed for adjusting the L&eacute;vy flight (P) and switching probability (P&prime;) parameters in the original cuckoo search (CS) and flower pollination (FP) algorithms, respectively. Our approach investigated trapezoidal membership functions (MFs) for the ST2FSs. The proposed method was used to optimize the precursors and implications of an interval type-2 fuzzy controller (IT2FLC) for a two-tank non-interacting conical frustum tank level (TTNCFTL) process. To ensure that the implementation is efficient compared with the original CS and FP algorithms, simulation results were obtained without and then with uncertainty in the main actuator (CV1) and a system component (leak) at the bottom of frustum tank two of the TTNCFTL process. In addition, the statistical z-test and non-parametric Friedman test were performed to analyze and deliver the findings for the best metaheuristic algorithm. The reported findings highlight the benefits of employing this approach over traditional general type-2 fuzzy inference systems, since superior performance is obtained in the majority of cases while using minimal computational resources.

Mathematical and Computational Applications doi: 10.3390/mca27060088

Authors: Muaaz Bhamjee Simon H. Connell André Leon Nel

The aim in this study was to determine how surging modifies the dynamic behaviour of the cyclonic flow in a hydrocyclone using computational fluid and granular dynamics models. The Volume-of-Fluid model was used to model the air-core formation. Fluid&ndash;particle, particle&ndash;particle, and particle&ndash;wall interactions were modelled using an unsteady two-way coupled Discrete Element Method. Turbulence was modelled using both the Reynolds Stress Model and the Large Eddy Simulation. The model predictions indicate that the phenomenon of surging modifies the dynamics of the cyclonic flow in hydrocyclones and subsequently impacts separation. The results reveal that the primary cyclonic separation mechanisms break down during surging and result in air-core suppression. The flow and primary separation mechanism in the core of the hydrocyclone is driven by the pressure drop and the flow and primary separation mechanism near the wall is primarily driven by the gravitational and centrifugal force-induced momentum. However, surging causes a breakdown in this mechanism by swapping this primary flow and separation behaviour, where the pressure drop becomes the primary driver of the flow near the walls and gravitational and centrifugal force-induced momentum primarily drives the flow in the core of the hydrocyclone.

Mathematical and Computational Applications doi: 10.3390/mca27050087

Authors: Clarissa Astuto Daniele Boffi Jan Haskovec Peter Markowich Giovanni Russo

We compare the solutions of two systems of partial differential equations (PDEs), seen as two different interpretations of the same model which describes the formation of complex biological networks. Both approaches take into account the time evolution of the medium flowing through the network, and we compute the solution of an elliptic&ndash;parabolic PDE system for the conductivity vector m, the conductivity tensor C and the pressure p. We use finite differences schemes in a uniform Cartesian grid in a spatially two-dimensional setting to solve the two systems, where the parabolic equation is solved using a semi-implicit scheme in time. Since the conductivity vector and tensor also appear in the Poisson equation for the pressure p, the elliptic equation depends implicitly on time. For this reason, we compute the solution of three linear systems in the case of the conductivity vector m &isin; R^2 and four linear systems in the case of the symmetric conductivity tensor C &isin; R^(2&times;2) at each time step. To accelerate the simulations, we make use of the Alternating Direction Implicit (ADI) method. The role of the parameters is important for obtaining detailed solutions. We provide numerous tests with various values of the parameters involved to determine the differences in the solutions of the two systems.
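Each ADI half-step reduces the 2D problem to families of one-dimensional tridiagonal linear systems; the classic Thomas algorithm that solves each of them in O(n) can be sketched as follows (a generic solver, not the paper's discretization of the conductivity/pressure system).

```python
# Thomas algorithm: O(n) solver for tridiagonal systems, the building block
# of each ADI sweep. a = sub-diagonal, b = main diagonal, c = super-diagonal,
# d = right-hand side (a[0] and c[-1] are unused).

def thomas(a, b, c, d):
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                       # forward elimination
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# 1D Laplacian stencil [-1, 2, -1] with right-hand side of ones
print(thomas([0, -1, -1, -1], [2, 2, 2, 2], [-1, -1, -1, 0], [1, 1, 1, 1]))
# solution close to [2, 3, 3, 2]
```

An ADI time step sweeps this solver along every grid row for the x-direction half-step and along every column for the y-direction half-step, which is what makes the semi-implicit scheme affordable on a uniform Cartesian grid.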

Mathematical and Computational Applications doi: 10.3390/mca27050086

Authors: Pedro Machado Sofia J. Pinheiro Vera Afreixo Cristiana J. Silva Rui Leitão

The COVID-19 pandemic remains a global problem that affects the health of millions of people and the world economy. Identifying how the movement of people between regions of the world, countries, and municipalities and how the close contact between individuals of different age groups promotes the spread of infectious diseases is a pressing concern for society, during epidemic outbreaks and pandemics, such as COVID-19. Networks and Graph Theory provide adequate and powerful tools to study the spread of communicable diseases. In this work, we use Graph Theory to analyze COVID-19 transmission dynamics between municipalities of Aveiro district, in Portugal, and between different age groups, considering data from 2020 and 2021, in order to better understand the spread of this disease, as well as preparing actions for possible future pandemics. We used a digraph structure that models the transmission of SARS-CoV-2 virus between Aveiro&rsquo;s municipalities and between age groups. To understand how a node fits within the contact digraphs, we studied centrality measures, namely eigencentrality, closeness, degree, and betweenness. Transmission ratios were also considered to determine whether there were certain age groups or municipalities that were more responsible for the virus&rsquo;s spread. According to the results of this research, transmissions mostly occur within the same social groupings, that is, within the same municipalities and age groups. However, the study of centrality measures, eliminating loops, reveals that municipalities such as Aveiro, Estarreja and Ovar are relevant nodes in the transmission network of municipalities as well as the age group of 40&ndash;49 in the transmission network of age groups. Furthermore, we conclude that vaccination is effective in reducing the spread of the virus.
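Degree and closeness centrality can be computed directly from their definitions on a small digraph. The municipality names come from the abstract, but the edges below are hypothetical, not the study's transmission data.

```python
from collections import deque

# Toy transmission digraph between municipalities (edges are hypothetical,
# only the node names are taken from the abstract).
edges = {"Aveiro": ["Ovar", "Estarreja"], "Ovar": ["Aveiro"],
         "Estarreja": ["Aveiro", "Ovar"], "Ilhavo": ["Aveiro"]}

def out_degree(g, v):
    return len(g[v])

def closeness(g, v):
    """(n - 1) / sum of BFS distances from v to the nodes it can reach."""
    dist, queue = {v: 0}, deque([v])
    while queue:
        u = queue.popleft()
        for w in g.get(u, []):
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    reached = [d for node, d in dist.items() if node != v]
    return (len(g) - 1) / sum(reached) if reached else 0.0

for v in edges:
    print(v, out_degree(edges, v), round(closeness(edges, v), 3))
```

Nodes that reach the rest of the network in few hops, such as "Aveiro" here, score highest on closeness, which is how the study identifies municipalities that matter most for transmission.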

Mathematical and Computational Applications doi: 10.3390/mca27050085

Authors: Fatima Ezzahra Fikri Karam Allali

The objective of this paper is to study a new mathematical model describing the human immunodeficiency virus (HIV). The model incorporates the impacts of cytotoxic T lymphocyte (CTL) immunity and antibodies with trilinear growth functions. The boundedness and positivity of solutions for non-negative initial data are proved, which is consistent with biological studies. The local stability of the equilibrium is established. Finally, numerical simulations are presented to support our theoretical findings.
