AppliedMath doi: 10.3390/appliedmath4020028

Authors: Edoardo Ballico

We study properties of the minimal Terracini loci, i.e., families of certain zero-dimensional schemes, in a projective plane. Among the new results are a maximality theorem and the existence of arbitrarily large gaps or non-gaps for the integers x for which the minimal Terracini locus in degree d is non-empty. We prove similar theorems for the critical schemes of the minimal Terracini sets; this part is set in a more general framework.

AppliedMath doi: 10.3390/appliedmath4020027

Authors: Mumuni Amadu Adango Miadonye

The transition zone (TZ) of hydrocarbon reservoirs is an integral part of the hydrocarbon pool which contains a substantial fraction of the deposit, particularly in carbonate petroleum systems. Consequently, knowledge of its thickness and petrophysical properties, viz. its pore size distribution and wettability characteristics, is critical to optimizing hydrocarbon production in this zone. Using classical formation evaluation techniques, the thickness of the transition zone has been estimated with well logging methods, including resistivity and nuclear magnetic resonance, among others. While hydrocarbon accumulation in petroleum reservoirs occurs through the migration of hydrocarbons into, and the displacement of water from, originally water-filled structural and stratigraphic traps, the development of the TZ integrates petrophysical processes that combine spontaneous capillary imbibition and wettability phenomena. In the literature, wettability phenomena have been shown to also be governed by electrostatic phenomena. Therefore, given that reservoir rocks are aggregates of minerals with ionizable surface groups that facilitate the development of an electric double layer, a definite theoretical relationship between the TZ and electrostatic theory must be feasible. Accordingly, a theoretical approach to estimating the TZ thickness based on the electric double layer theory is attractive, but it is lacking in the literature. Herein, we fill this knowledge gap by using the interfacial electrostatic theory based on the fundamental tenets of the solution to the Poisson–Boltzmann mean field theory. Accordingly, we have used an existing model of capillary rise based on free energy concepts to derive a capillary rise equation that can be used to theoretically predict observations of the TZ thickness of different reservoir rocks obtained with well-established formation evaluation methods.
The novelty of our work stems from the ability of the model to theoretically and accurately predict the TZ thickness of the different lithostratigraphic units of hydrocarbon reservoirs, because of the experimental accessibility of its model parameters.

AppliedMath doi: 10.3390/appliedmath4020026

Authors: Juan José Fernández-Durán María Mercedes Gregorio-Domínguez

The sum of independent circular uniformly distributed random variables is also circular uniformly distributed. In this study, it is shown that a family of circular distributions based on nonnegative trigonometric sums (NNTS) is also closed under summation. Given the flexibility of NNTS circular distributions in modeling multimodality and skewness, they are good alternative models for testing circular uniformity, since they can detect different deviations from the null hypothesis. The circular uniform distribution is a member of the NNTS family, but it corresponds to a point on the boundary of the NNTS parameter space, implying that the regularity conditions are not satisfied when the parameters are estimated by maximum likelihood. Two NNTS tests for circular uniformity were developed by considering the standardised maximum likelihood estimator and the generalised likelihood ratio. Given the nonregularity, the critical values of the proposed NNTS circular uniformity tests were obtained via simulation and interpolated for any sample size by fitting regression models. The validity of the proposed tests was evaluated by generating NNTS models close to the circular uniformity null hypothesis.
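The NNTS test statistics require the NNTS likelihood machinery, which the abstract does not spell out. For orientation, the classical Rayleigh test for circular uniformity can be sketched in a few lines; the sample sizes and the von Mises alternative below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def rayleigh_test(angles):
    """Classical Rayleigh test for circular uniformity.
    Returns z = n * Rbar**2 and an approximate p-value; small p-values
    reject the null hypothesis of circular uniformity."""
    n = len(angles)
    c, s = np.cos(angles).sum(), np.sin(angles).sum()
    rbar = np.sqrt(c**2 + s**2) / n               # mean resultant length
    z = n * rbar**2
    # Standard large-sample approximation, clamped to [0, 1]
    p = np.exp(-z) * (1.0 + (2.0 * z - z**2) / (4.0 * n))
    return z, min(max(p, 0.0), 1.0)

rng = np.random.default_rng(0)
z_unif, p_unif = rayleigh_test(rng.uniform(0.0, 2.0 * np.pi, 300))
z_conc, p_conc = rayleigh_test(rng.vonmises(0.0, 2.0, 300))  # unimodal alternative
```

The Rayleigh test mainly detects unimodal departures from uniformity, which is precisely why flexible alternatives such as the NNTS family, able to capture multimodality and skewness, are attractive.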

AppliedMath doi: 10.3390/appliedmath4020025

Authors: David Ellerman

The new approach to quantum mechanics (QM) is that the mathematics of QM is the linearization of the mathematics of partitions (or equivalence relations) on a set. This paper develops those ideas using vector spaces over the field Z2 = {0,1} as a pedagogical or toy model of (finite-dimensional, non-relativistic) QM. The 0,1-vectors are interpreted as sets, so the model is "quantum mechanics over sets" or QM/Sets. The key notions of partitions on a set are the logical-level notions to model distinctions versus indistinctions, definiteness versus indefiniteness, or distinguishability versus indistinguishability. Those pairs of concepts are the key to understanding the non-classical 'weirdness' of QM. The key non-classical notion in QM is superposition, i.e., the notion of a state that is indefinite between two or more definite states (eigenstates). As Richard Feynman emphasized, all the weirdness of QM is illustrated in the double-slit experiment, so the QM/Sets version of that experiment is used to make the key points.

AppliedMath doi: 10.3390/appliedmath4020024

Authors: Ghazanfar Shahgholian Arman Fathollahi

The frequency deviation from the nominal working frequency in power systems is a consequence of the imbalance between the total electrical load and the aggregate power supplied by production units. The sensitivity of energy system frequency to both minor and major load variations underscores the need for effective load frequency control mechanisms. In this paper, load frequency control in a single-area power system with multi-source energy is analysed and simulated. The effect of a photovoltaic system on the frequency deviation of the energy system is also shown. In the single-area energy system, the dynamics of a thermal turbine with reheat, a thermal turbine without reheat, and a hydro turbine are considered. Simulation results in MATLAB/Simulink and model analysis using eigenvalue analysis show the dynamic behaviour of the power system in response to load changes.

AppliedMath doi: 10.3390/appliedmath4020023

Authors: John Constantine Venetis

In this paper, an analytical exact form of the ramp function is presented. This seminal function constitutes a fundamental concept of digital signal processing theory and is also involved in many other areas of applied science and engineering. In particular, the ramp function is expressed in a simple manner as the pointwise limit of a sequence of real, continuous functions. This limit is zero for strictly negative values of the real variable x, whereas it coincides with x for strictly positive values of x. One may note beforehand that the pointwise limit of a sequence of continuous functions can constitute a discontinuous function, provided that the convergence is not uniform. The novelty of this work, compared to other studies concerning analytical expressions of the ramp function, is that the proposed formula is not exhibited in terms of special functions, e.g., the gamma function, the biexponential function, the error function, hyperbolic functions, or orthogonal polynomials. Hence, this formula may be more practical, flexible, and useful in the computational procedures embedded in digital signal processing techniques and other engineering practices.
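The abstract does not reproduce the paper's formula, but the mechanism it describes, a sequence of continuous functions converging pointwise to the ramp, is easy to illustrate. The logistic-based sequence below is a stand-in chosen for illustration, not the author's expression:

```python
import numpy as np

def ramp(x):
    """Exact ramp function: 0 for x < 0, x for x >= 0."""
    return np.maximum(x, 0.0)

def ramp_n(x, n):
    """f_n(x) = x * sigmoid(n*x): continuous for every n and converging
    to the ramp as n grows (a stand-in for the paper's own sequence,
    which the abstract does not give)."""
    x = np.asarray(x, dtype=float)
    t = n * x
    # Numerically stable sigmoid: only exponentials of negative arguments
    sig = np.where(t >= 0.0,
                   1.0 / (1.0 + np.exp(-np.abs(t))),
                   np.exp(-np.abs(t)) / (1.0 + np.exp(-np.abs(t))))
    return x * sig

x = np.linspace(-2.0, 2.0, 401)
max_err = float(np.max(np.abs(ramp_n(x, 1000) - ramp(x))))
```

For this particular sequence the limit function (the ramp) happens to be continuous; the abstract's remark is the general one that a pointwise limit of continuous functions may be discontinuous when convergence is not uniform.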

AppliedMath doi: 10.3390/appliedmath4020022

Authors: Liang Kong Yanhui Guo Chung-wei Lee

Accurate forecasting of the coronavirus disease 2019 (COVID-19) spread is indispensable for effective public health planning and the allocation of healthcare resources at all levels of governance, both nationally and globally. Conventional prediction models for the COVID-19 pandemic often fall short in precision due to their reliance on homogeneous time-dependent transmission rates and their oversight of geographical features when isolating study regions. To address these limitations and advance the predictive capabilities of COVID-19 spread models, it is imperative to refine model parameters in accordance with evolving insights into the disease trajectory, transmission rates, and the myriad economic and social factors influencing infection. This research introduces a novel hybrid model that combines classic epidemic equations with a recurrent neural network (RNN) to predict the spread of the COVID-19 pandemic. The proposed model integrates time-dependent features, namely the numbers of individuals classified as susceptible, infectious, recovered, and deceased (SIRD), and incorporates human mobility from neighboring regions as a crucial spatial feature. The study formulates a discrete-time function within the infection component of the SIRD model, ensuring real-time applicability while mitigating overfitting and enhancing overall efficiency compared to various existing models. Validation of the proposed model was conducted using a publicly available COVID-19 dataset from Italy. Experimental results demonstrate the model's exceptional performance, surpassing existing spatiotemporal models in three-day-ahead forecasting. This research not only contributes to the field of epidemic modeling but also provides a robust tool for policymakers and healthcare professionals to make informed decisions in managing and mitigating the impact of the COVID-19 pandemic.
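The RNN and mobility components are beyond an abstract-level sketch, but the discrete-time SIRD backbone the hybrid model builds on can be written down directly. The parameter values below are illustrative assumptions; in the paper's hybrid scheme the transmission rate would be produced by the learned network rather than held constant.

```python
def sird_step(S, I, R, D, beta, gamma, mu, N):
    """One discrete-time SIRD update (daily step).
    beta: transmission rate, gamma: recovery rate, mu: death rate.
    In the hybrid model described above, beta would come from an RNN
    fed with temporal and mobility features; here it is a constant."""
    new_inf = beta * S * I / N
    new_rec = gamma * I
    new_dead = mu * I
    return (S - new_inf,
            I + new_inf - new_rec - new_dead,
            R + new_rec,
            D + new_dead)

N = 1_000_000
S, I, R, D = N - 100, 100, 0, 0
history = []
for _ in range(120):                     # 120 days, illustrative parameters
    S, I, R, D = sird_step(S, I, R, D, beta=0.25, gamma=0.1, mu=0.01, N=N)
    history.append(I)
```

The update conserves the population by construction, and with beta exceeding gamma + mu the infectious count first grows and then declines as susceptibles are depleted.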

AppliedMath doi: 10.3390/appliedmath4010021

Authors: Yuhlong Lio Ding-Geng Chen Tzong-Ru Tsai Liang Wang

The reliability of a multicomponent stress–strength system was investigated under the two-parameter Burr X distribution model. Based on the structure of the system, a type II censored sample of strength and a random sample of stress were obtained for the study. The maximum likelihood estimators were established by utilizing the type II censored Burr X distributed strength data and the complete random stress data collected from the multicomponent system. Two related approximate confidence intervals were obtained by utilizing the delta method under asymptotic normal distribution theory and a parametric bootstrap procedure. Meanwhile, point and confidence interval estimators based on alternative generalized pivotal quantities were derived. Furthermore, a likelihood ratio test to infer the equality of the two scalar parameters is provided. Finally, a practical example is provided for illustration.

AppliedMath doi: 10.3390/appliedmath4010020

Authors: Lucian Trifina Daniela Tărniceriu Ana-Mirela Rotopănescu

In this paper, we address the inverse of a true fourth-degree permutation polynomial (4-PP) modulo a positive integer of the form 32kLΨ, where kL ∈ {1,3} and Ψ is a product of different prime numbers greater than three. Some constraints are imposed on the 4-PPs to avoid complicated conditions on the coefficients. With fourth- and third-degree coefficients of the form k4,fΨ and k3,fΨ, respectively, we prove that the inverse PP is (I) a 4-PP when k4,f ∈ {1,3} and k3,f ∈ {1,3,5,7}, or when k4,f = 2, and (II) a 5-PP when k4,f ∈ {1,3} and k3,f ∈ {0,2,4,6}.

AppliedMath doi: 10.3390/appliedmath4010019

Authors: Michel Adès Serge B. Provost Yishan Zang

Four measures of association, namely Spearman's ρ, Kendall's τ, Blomqvist's β, and Hoeffding's Φ², are expressed in terms of copulas. Conveniently, this article also includes explicit expressions for their empirical counterparts. Moreover, copula representations of the four coefficients are provided for the multivariate case, and several specific applications are pointed out. Additionally, a numerical study is presented with a view to illustrating the types of relationships that each of the measures of association can detect.
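The empirical counterparts of three of these coefficients are short enough to sketch from their textbook definitions (Hoeffding's Φ² needs more machinery and is omitted). The data below are simulated, ties are assumed absent, and none of this reproduces the article's copula formulas:

```python
import numpy as np

def ranks(a):
    """Ranks 1..n (continuous data assumed, so ties are ignored)."""
    r = np.empty(len(a))
    r[np.argsort(a)] = np.arange(1, len(a) + 1)
    return r

def spearman_rho(x, y):
    """Empirical Spearman's rho: Pearson correlation of the ranks."""
    return np.corrcoef(ranks(x), ranks(y))[0, 1]

def kendall_tau(x, y):
    """Empirical Kendall's tau: (concordant - discordant) pair counts
    over the total number of pairs (simple O(n^2) version)."""
    n = len(x)
    dx = np.sign(x[:, None] - x[None, :])
    dy = np.sign(y[:, None] - y[None, :])
    return (dx * dy).sum() / (n * (n - 1))

def blomqvist_beta(x, y):
    """Empirical Blomqvist's beta: medial (sign) correlation around the
    coordinatewise medians."""
    return np.mean(np.sign((x - np.median(x)) * (y - np.median(y))))

rng = np.random.default_rng(1)
x = rng.normal(size=500)
y = x + 0.5 * rng.normal(size=500)      # positively dependent pair

rho_hat = spearman_rho(x, y)
tau_hat = kendall_tau(x, y)
beta_hat = blomqvist_beta(x, y)
```

All three estimates should be clearly positive for this dependent pair, while each coefficient weighs the dependence differently, which is the point of the article's comparison.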

AppliedMath doi: 10.3390/appliedmath4010018

Authors: Alexander Melnikov Pouneh Mohammadi Nejad

This paper investigates a financial market where asset prices follow a multi-dimensional Brownian motion process and a multi-dimensional Poisson process, with different credit and deposit rates, the credit rate being higher than the deposit rate. The focus extends to evaluating European options by establishing upper and lower hedging prices through a transition to a suitable auxiliary market. A lemma shows that, under specific conditions, the pricing problem has the same solution in both markets. Additionally, we address the minimization of shortfall risk and determine no-arbitrage price bounds within the framework of incomplete markets. This study provides a comprehensive understanding of the challenges posed by the multi-dimensional jump-diffusion model and varying interest rates in financial markets.

AppliedMath doi: 10.3390/appliedmath4010017

Authors: Salma A. A. Ahmedai Abd Allah Precious Sibanda Sicelo P. Goqo Uthman O. Rufai Hloniphile Sithole Mthethwa Osman A. I. Noreldin

In this paper, we extend the block hybrid method with equally spaced intra-step points to solve linear and nonlinear third-order initial value problems. The proposed block hybrid method uses a simple iteration scheme to linearize the equations. Numerical experimentation demonstrates that equally spaced grid points enhance the block hybrid method's speed of convergence and accuracy compared to other conventional block hybrid methods in the literature. This improvement is attributed to the linearization process, which avoids the use of derivatives. Further, the block hybrid method is consistent, stable, and gives rapid convergence to the solutions. We show that the simple iteration method, when combined with the block hybrid method, exhibits impressive convergence characteristics while preserving computational efficiency. In this study, we also implement the proposed method to solve the nonlinear Jerk equation, producing results comparable with those of other methods in the literature.

AppliedMath doi: 10.3390/appliedmath4010016

Authors: Marco Antonio Montufar Benítez Jaime Mora Vargas José Raúl Castro Esparza Héctor Rivera Gómez Oscar Montaño Arango

The main purpose of this paper is to implement a simulation model in @RISK™ and study the impact of incorporating random variables, such as the degree days, into a traditional deterministic model for calculating the optimum thickness of thermal insulation in walls. Green buildings have become important because of the increasing worldwide interest in reducing environmental pollution, and one method of saving energy is to use thermal insulation; the optimum thickness of these insulators has traditionally been calculated using deterministic models. From real data on the degree days required in a certain zone in Palestine during winter, random samples of the annually required degree days were generated for periods of 10, 20, 50, and 70 years. The results showed that the probability of exceeding the net present value of the cost calculated using deterministic analysis ranges from 0% to 100%, regardless of the inflation rate. The results also show that, for design lifetimes greater than 40 years, the risk of overspending is lower if the building lasts longer than the period for which it was designed. Moreover, this risk is transferred to whoever will pay the operating costs of heating the building. The contribution of this research is twofold: (a) a stochastic approach is incorporated into the traditional models that determine the optimum thickness of thermal insulation used in buildings, by introducing the variability of the degree days required in a given region; (b) a measure of the economic risk incurred by building heating is established as a function of the years of use for which the building is designed and the number of years it is actually used.

AppliedMath doi: 10.3390/appliedmath4010015

Authors: Constantin Fetecau Costică Moroşanu Shehraz Akhtar

In this work, we investigate isothermal MHD motions of a large class of rate-type fluids through a porous medium between two infinite horizontal parallel plates when a differential expression of the non-trivial shear stress is prescribed on the boundary. Exact expressions are provided for the dimensionless steady-state velocities, shear stresses, and Darcy's resistances. The obtained solutions can be used to find the time needed to reach the steady state or to bring to light certain characteristics of the fluid motion. Graphical representations show that the fluid moves more slowly in the presence of a magnetic field or porous medium. In addition, contrary to our expectations, the volume flux across a plane orthogonal to the velocity vector, per unit width of this plane, is zero. Finally, based on a simple remark regarding the governing equations of velocity and shear stress for MHD motions of incompressible generalized Burgers' fluids between infinite parallel plates, the first exact solutions are provided for MHD motions of these fluids when the two plates apply oscillatory or constant shear stresses to the fluid. This remark offers the possibility to solve any isothermal MHD motion of these fluids between infinite parallel plates, or over an infinite plate, when the non-trivial shear stress is prescribed on the boundary. As an application, steady-state solutions for MHD motions of the same fluids are developed when a differential expression of the fluid velocity is prescribed on the boundary.

AppliedMath doi: 10.3390/appliedmath4010014

Authors: Manabu Ichino

The quantile method transforms each complex object described by different histogram values to a common number of quantile vectors. This paper retraces the authors' research, including principal component analysis, unsupervised feature selection using hierarchical conceptual clustering, and a lookup table regression model. The purpose is to show that this research is essentially based on the monotone property of quantile vectors and works cooperatively in the exploratory analysis of the given distributional data.
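As a minimal illustration of the common representation the quantile method relies on, distribution-valued observations can be mapped to quantile vectors of equal length, which are monotone (nondecreasing) by construction. The number of quantiles and the simulated data below are illustrative choices, not the paper's:

```python
import numpy as np

def quantile_vector(sample, m):
    """Represent a distribution-valued observation by the m+1 quantiles
    at probabilities 0, 1/m, ..., 1, so that objects described by
    different histograms become comparable vectors of equal length."""
    probs = np.linspace(0.0, 1.0, m + 1)
    return np.quantile(np.asarray(sample, dtype=float), probs)

rng = np.random.default_rng(2)
qa = quantile_vector(rng.normal(0.0, 1.0, 1000), 4)   # object A
qb = quantile_vector(rng.normal(3.0, 1.0, 1000), 4)   # object B, shifted up
```

The monotone property referred to in the abstract is visible here: each quantile vector is nondecreasing, and a stochastically larger object dominates componentwise.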

AppliedMath doi: 10.3390/appliedmath4010013

Authors: Vladimir Volenec Marija Šimić Horvath Ema Jurkin

In this paper, we study the properties of a complete quadrangle in the Euclidean plane. The proofs are based on rectangular coordinates, used symmetrically with respect to the four vertices, and on four parameters a, b, c, d. Many properties of the complete quadrangle known from earlier research are proved here by the same method, and some new results are given.

AppliedMath doi: 10.3390/appliedmath4010012

Authors: Elisabetta Barletta Sorin Dragomir Francesco Esposito

We study the random flow, through a thin cylindrical tube, of a physical quantity of random density, in the presence of random sinks and sources. We model convection in terms of the expectations of the flux and density and solve the initial value problem for the resulting convection equation. We propose a difference scheme for the convection equation that is stable and satisfies the Courant–Friedrichs–Lewy condition, and we estimate the difference between the exact and approximate solutions.

AppliedMath doi: 10.3390/appliedmath4010011

Authors: Robert Gardner Kazeem Kosebinu

Graph and digraph decompositions are a fundamental part of design theory. Probably the best known decompositions are related to decomposing the complete graph into 3-cycles (which correspond to Steiner triple systems), and decomposing the complete digraph into orientations of a 3-cycle (the two possible orientations of a 3-cycle correspond to directed triple systems and Mendelsohn triple systems). Decompositions of the λ-fold complete graph and the λ-fold complete digraph have been explored, giving generalizations of decompositions of complete simple graphs and digraphs. Decompositions of the complete mixed graph (which contains an edge and two distinct arcs between every two vertices) have also been explored in recent years. Since the complete mixed graph has twice as many arcs as edges, an isomorphic decomposition of a complete mixed graph into copies of a sub-mixed graph must involve a sub-mixed graph with twice as many arcs as edges. A partial orientation of a 6-star with two edges and four arcs is an example of such a mixed graph; there are five such mixed stars. In this paper, we give necessary and sufficient conditions for a decomposition of the λ-fold complete mixed graph into each of these five mixed stars for all λ > 1.

AppliedMath doi: 10.3390/appliedmath4010010

Authors: Frederika Rentzeperis Benjamin Coleman Dorothy Wallace

Radiotherapy can differentially affect the phases of the cell cycle, possibly enhancing suppression of tumor growth, if cells are synchronized in a specific phase. A model is designed to replicate experiments that synchronize cells in the S phase using gemcitabine before radiation at various doses, with the goal of quantifying this effect. The model is used to simulate a clinical trial with a cohort of 100 individuals receiving only radiation and another cohort of 100 individuals receiving radiation after cell synchronization. The simulations offered in this study support the statement that, at suitably high levels of radiation, synchronizing melanoma cells with gemcitabine before treatment substantially reduces the final tumor size. The improvement is statistically significant, and the effect size is noticeable, with the near suppression of growth at 8 Gray and 92% synchronization.

AppliedMath doi: 10.3390/appliedmath4010009

Authors: Benito Chen-Charpentier

Hepatitis B is a liver disease caused by the human hepatitis B virus (HBV). Mathematical models further the understanding of the processes involved and help make predictions. The basic reproduction number, R0, is an index that predicts whether the disease will be chronic or not; this is the single most important piece of information that a mathematical model can give. Within-host virus processes involve delays. We study two within-host hepatitis B virus infection models, without and with delay. One is standard; the other, which considers additional processes and includes two delays, is new. We analyze the basic reproduction number and alternative threshold indices. The values of R0 and the alternative indices change depending on the model. All these indices predict whether the infection will persist, but they do not give the same initial growth rate of the infection. Therefore, the choice of model is very important in establishing whether the infection is chronic and how fast it initially grows. We analyze these indices to see how to decrease their value. We study the effect of adding delays and how the threshold indices depend on how the delays are included. We do this by studying the local asymptotic stability of the disease-free equilibrium or by using an equivalent method. We show that, for some models, the indices do not change when delays are introduced, but they do change when the delays are introduced differently. Numerical simulations are presented to confirm the results. Finally, some conclusions are presented.
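The paper's two HBV models are not reproduced in the abstract, but the threshold role of R0 can be made concrete with the standard (delay-free) target-cell-limited within-host model, for which R0 = βλp/(dδc). All parameter values below are illustrative assumptions:

```python
def r0(beta, lam=1e4, d=0.01, delta=0.5, p=100.0, c=5.0):
    """Basic reproduction number of the standard within-host model."""
    return beta * lam * p / (d * delta * c)

def peak_virus(beta, lam=1e4, d=0.01, delta=0.5, p=100.0, c=5.0,
               days=200.0, dt=0.01):
    """Forward-Euler run of the standard target-cell-limited model
    T' = lam - d*T - beta*T*V,  I' = beta*T*V - delta*I,
    V' = p*I - c*V, seeded with a single virion.
    Returns the peak virus load over the run."""
    T, I, V = lam / d, 0.0, 1.0
    vmax = V
    for _ in range(int(days / dt)):
        dT = lam - d * T - beta * T * V
        dI = beta * T * V - delta * I
        dV = p * I - c * V
        T, I, V = T + dt * dT, I + dt * dI, V + dt * dV
        vmax = max(vmax, V)
    return vmax
```

With these rates, a transmission coefficient giving R0 < 1 leads to a fizzled infection (the virus load never exceeds its initial value by much), whereas R0 > 1 produces a large acute peak, which is the persistence threshold behavior the abstract discusses.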

AppliedMath doi: 10.3390/appliedmath4010008

Authors: Peter Berzi

A system of simultaneous multi-variable nonlinear equations can be solved by Newton's method with local q-quadratic convergence if the Jacobian is analytically available. If this is not the case, then quasi-Newton methods with local q-superlinear convergence give solutions by approximating the Jacobian in some way. Unfortunately, the quasi-Newton condition (secant equation) does not completely specify the Jacobian approximation in multi-dimensional cases, so a full-rank update is not possible with classic variants of the method. The suggested new iteration strategy ("T-Secant") allows a full-rank update of the Jacobian approximation in each iteration by determining two independent approximations of the solution. These are used to generate a set of new independent trial approximations; then, the Jacobian approximation can be fully updated. It is shown that the T-Secant approximation is in the vicinity of the classic quasi-Newton approximation, provided that the solution is evenly surrounded by the new trial approximations. The suggested procedure increases the superlinear convergence of the secant method, φS = 1.618…, to super-quadratic, φT = φS + 1 = 2.618…, and the quadratic convergence of Newton's method, φN = 2, to cubic, φT = φN + 1 = 3, in one-dimensional cases. In multi-dimensional cases, the Broyden-type efficiency (mean convergence rate) of the suggested method is an order higher than that of other classic low-rank-update quasi-Newton methods, as shown by numerical examples on a Rosenbrock-type test function with up to 1000 variables. A geometrical representation (hyperbolic approximation) in single-variable cases helps explain the basic operations, and a vector-space description is also given for multi-variable cases.
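The T-Secant update itself needs the paper's multi-dimensional machinery, but the classic one-dimensional secant iteration it accelerates (the φS ≈ 1.618 baseline) fits in a few lines; the test function and starting points are arbitrary illustrative choices:

```python
def secant(f, x0, x1, tol=1e-12, max_iter=100):
    """Classic secant iteration: each step replaces the derivative in
    Newton's method with the finite-difference slope through the two
    most recent iterates.  This is the ~1.618-order baseline that the
    T-Secant strategy is designed to accelerate."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if f1 - f0 == 0.0:              # flat chord: cannot proceed
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    return x1

root = secant(lambda x: x**3 - 2.0, 1.0, 2.0)   # cube root of 2
```

Note that only one new function evaluation is needed per step, which is why the secant family is attractive when derivatives (the Jacobian, in several variables) are unavailable.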

AppliedMath doi: 10.3390/appliedmath4010007

Authors: Emilio Matricciani

The purpose of the present paper is to further investigate the mathematical structure of sentences proposed in a recent paper and its connections with human short-term memory. This structure is defined by two independent variables which apparently engage two short-term memory buffers in series. The first buffer is modelled according to the number of words between two consecutive interpunctions (a variable referred to as the word interval, IP), which follows Miller's 7 ± 2 law; the second buffer is modelled by the number of word intervals contained in a sentence, MF, ranging approximately from one to seven. These values result from studying a large number of literary texts belonging to ancient and modern alphabetical languages. After studying the numerical patterns (combinations of IP and MF) that determine the number of sentences that can theoretically be recorded in the two memory buffers, a number which increases with IP and MF, we compare the theoretical results with those actually found in novels from Italian and English literature. We have found that most writers, in both languages, write for readers with small memory buffers and, consequently, are forced to reuse sentence patterns to convey multiple meanings.
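Both quantities are easy to extract from raw text. The sketch below takes commas, semicolons, and colons as within-sentence interpunctions and sentence-final marks as terminators; this tokenization is an assumption for illustration, and the paper's own rules may differ:

```python
import re

def word_intervals(text):
    """Return (ips, mfs): ips lists the word count of every interval
    between consecutive interpunctions (the word interval, IP), and
    mfs lists, per sentence, how many such intervals it contains (MF).
    Interpunctions assumed here: , ; : within sentences; . ! ? end them."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    ips, mfs = [], []
    for s in sentences:
        chunks = [c for c in re.split(r"[,;:]+", s) if c.strip()]
        mfs.append(len(chunks))
        ips.extend(len(c.split()) for c in chunks)
    return ips, mfs

sample = ("Call me Ishmael. Some years ago, never mind how long precisely, "
          "I thought I would sail about a little.")
ips, mfs = word_intervals(sample)
```

On this two-sentence sample the routine reports word intervals of 3, 3, 5, and 8 words and sentences containing 1 and 3 intervals, the kind of (IP, MF) pattern the paper tabulates across whole novels.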

AppliedMath doi: 10.3390/appliedmath4010006

Authors: Kabiru Michael Adeyemo Kayode Oshinubi Umar Muhammad Adam Adejimi Adeniji

A co-infection model for onchocerciasis and Lassa fever (OLF) with periodic variational vectors and optimal control is studied and analyzed to assess the impact of controls against incident infections. The model is qualitatively examined in order to evaluate its asymptotic behavior in relation to the equilibria. Employing a Lyapunov function, we demonstrate that the disease-free equilibrium (DFE) is globally asymptotically stable when the related basic reproduction number is less than unity. When it is greater than one, we use a suitable nonlinear Lyapunov function to demonstrate the existence of a globally asymptotically stable endemic equilibrium (EE). Furthermore, the necessary conditions for the existence of optimal controls and the optimality system for the co-infection model are established using Pontryagin's maximum principle. The model is quantitatively analyzed by studying the sensitivity of the basic reproduction number to the model parameters, and a model simulation using the Runge–Kutta technique of order 4 is presented to study the effects of the treatments. We deduce from the quantitative analysis that, with effective treatment and diagnosis of those exposed to and infected with the disease, the spread of the viral disease can be effectively managed. The results presented in this work will be useful for the proper mitigation of the disease.

AppliedMath doi: 10.3390/appliedmath4010005

Authors: Ayan Bhattacharya

It is common in financial markets for market makers to offer prices on derivative instruments even though they are uncertain about the underlying asset's value. This paper studies the mathematical problem that arises as a result. Derivatives are priced in the risk-neutral framework, so as the market maker acquires more information about the underlying asset, the change of measure for transition to the risk-neutral framework (the pricing kernel) evolves. This evolution takes a precise form when the market maker is Bayesian. It is shown that Bayesian updates can be characterized as additional informational drift in the underlying asset's stochastic process. With Bayesian updates, the change of measure needed for pricing derivatives is two-fold: the first change is from the prior probability measure to the posterior probability measure, and the second change is from the posterior probability measure to the risk-neutral measure. The relation between the regular pricing kernel and the pricing kernel under this two-fold change of measure is characterized.

AppliedMath doi: 10.3390/appliedmath4010004

Authors: Joan-Carles Artés Jaume Llibre Nicolae Vulpe

The quadratic polynomial differential system dx/dt = y − x, dy/dt = 2y − ((γ−1)/(2−γ)) y² − ((5γ−4)/(γ−1)) x, with parameter γ ∈ (1,2], models the structure equations of an isotropic star having a linear barotropic equation of state, where x = m(r)/r, with m(r) ≥ 0 the mass inside the sphere of radius r of the star, y = 4πr²ρ, with ρ the density of the star, and t = ln(r/R), with R the radius of the star. First, we classify all the topologically non-equivalent phase portraits in the Poincaré disc of these quadratic polynomial differential systems for all values of the parameter γ ∈ ℝ∖{1}. Second, using the information from the different phase portraits obtained, we classify the possible limit values of m(r)/r and 4πr²ρ of an isotropic star as r decreases.

AppliedMath doi: 10.3390/appliedmath4010003

Authors: Paul Romatschke

If a quantum field theory has a Landau pole, the theory is usually called 'sick' and dismissed as a candidate for an interacting UV-complete theory. In a recent study of the interacting 4d O(N) model at large N, it was shown that observables remain well-defined and finite at the Landau pole. In this work, I investigate both relevant and irrelevant deformations of the said model at the Landau pole, finding that physical observables remain unaffected. Apparently, the Landau pole in this theory is benign. As a phenomenological application, I compare the O(N) model to QCD by identifying Λ_MS-bar with the Landau pole of the O(N) model.

AppliedMath doi: 10.3390/appliedmath4010002

Authors: Maria de Fátima Brilhante Dinis Pestana Pedro Pestana Maria Luísa Rocha

Modeling the vulnerability lifecycle and exploitation frequency is at the core of network security evaluation. Pareto, Weibull, and log-normal models have been widely used to model the exploit and patch availability dates, the time to compromise a system, the time between compromises, and the exploitation volumes. Random samples (systematic and simple random sampling) of the time from publication to update of cybervulnerabilities disclosed in 2021 and in 2022 are analyzed to evaluate the goodness-of-fit of the traditional Pareto and log-normal laws. As censoring and thinning almost surely occur, other heavy-tailed distributions in the domain of attraction of extreme value or geo-extreme value laws are investigated as suitable alternatives. Goodness-of-fit tests, the Akaike information criterion (AIC), and the Vuong test support the statistical choice of the log-logistic, a geo-max stable law in the domain of attraction of the Fréchet model of maxima, with hyperexponential and general extreme value fittings as runners-up. Evidence that the data come from a mixture of differently stretched populations affects vulnerability scoring systems, specifically the Common Vulnerability Scoring System (CVSS).

AppliedMath doi: 10.3390/appliedmath4010001

Authors: Loukas Zachilas Christos Benos

Our aim is to provide insight into the procedures and dynamics that drive the spread of contagious diseases through populations. Our simulation tool can increase our understanding of the spatial parameters that affect the diffusion of a virus. SIR models are based on the hypothesis that populations are "well mixed". Our model is an attempt to focus on the effects of the specific distribution of the initially infected individuals through the population and to provide insights that account for the stochasticity of the transmission process. For this purpose, we represent the population as a square lattice of nodes. Each node represents an individual that may or may not carry the virus. Nodes that carry the virus can only transfer it to susceptible neighboring nodes. This important revision of the common SIR model provides a very realistic property: the same number of initially infected individuals can lead to multiple epidemic paths, depending on their initial distribution in the lattice. This property enables better predictions and probable scenarios from which to construct a probability function and appropriate confidence intervals. Finally, this structure permits realistic visualizations of the results, helping us understand the process of contagion and the spread of a disease and the effects of any measures applied, especially mobility restrictions, among countries and regions.
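As a minimal sketch of the lattice mechanism described above (the parameter values, the synchronous update rule, and all function names are illustrative assumptions, not taken from the paper), a stochastic SIR simulation on a square lattice might look like:

```python
import random

S, I, R = 0, 1, 2  # node states: susceptible, infected, recovered

def simulate(n=30, beta=0.3, gamma=0.1, steps=50, seed=1, initial_infected=5):
    """Stochastic SIR on an n x n lattice: infected nodes can only transmit
    to their susceptible nearest neighbours, then recover with probability gamma."""
    rng = random.Random(seed)
    grid = [[S] * n for _ in range(n)]
    # the initial distribution of infected nodes is itself a model input
    for idx in rng.sample(range(n * n), initial_infected):
        grid[idx // n][idx % n] = I
    for _ in range(steps):
        new = [row[:] for row in grid]
        for i in range(n):
            for j in range(n):
                if grid[i][j] != I:
                    continue
                # try to infect the four susceptible nearest neighbours
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    a, b = i + di, j + dj
                    if 0 <= a < n and 0 <= b < n and grid[a][b] == S \
                            and rng.random() < beta:
                        new[a][b] = I
                if rng.random() < gamma:
                    new[i][j] = R  # recovery
        grid = new
    flat = [state for row in grid for state in row]
    return flat.count(S), flat.count(I), flat.count(R)
```

Re-running with the same number of initially infected nodes but a different seed (i.e., a different initial distribution) generally produces a different epidemic path, which is the property the abstract emphasizes.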

AppliedMath doi: 10.3390/appliedmath3040052

Authors: Roberto Herrero Joan Nieves Augusto Gonzalez

The innate immune system is the first line of defense against pathogens. Its composition includes barriers, mucus, and other substances, as well as phagocytic and other cells. The purpose of the present paper is to compare tissues with regard to their immune response to infections and to cancer. Simple ideas and the qualitative theory of differential equations are used along with general principles such as the minimization of the pathogen load and the economy of resources. In the simplest linear model, the annihilation rate of pathogens in any tissue should be greater than the pathogen's average replication rate. When nonlinearities are added, a stability condition emerges, which relates the strength of regular threats, barrier height, and annihilation rate. The stability condition allows for a comparison of immunity in different tissues. On the other hand, in cancer immunity, the linear model leads to an expression for the lifetime risk, which accounts for both the effects of carcinogens (endogenous or external) and the immune response. The way a tissue responds to an infection correlates with the way it responds to cancer. The results of this paper are formulated as precise statements, in such a way that they could be checked by present-day quantitative immunology.

AppliedMath doi: 10.3390/appliedmath3040051

Authors: Ekta Sharma Shubham Kumar Mittal J. P. Jaiswal Sunil Panday

New three-step with-memory iterative methods for solving nonlinear equations are presented. We enhance the convergence order of an existing eighth-order memoryless iterative method by transforming it into a with-memory method. The acceleration of the convergence order is achieved by introducing two self-accelerating parameters, computed using the Hermite interpolating polynomial. The R-order of convergence of the proposed uni- and bi-parametric with-memory methods increases from 8 to 9 and 10, respectively. This increase is accomplished without requiring additional function evaluations, making the with-memory methods computationally efficient. The efficiency index of our with-memory methods NWM9 and NWM10 increases from 1.6818 to 1.7320 and 1.7783, respectively. Numerical testing confirms the theoretical findings and emphasizes the superior efficacy of the suggested methods compared with some well-known methods in the existing literature.
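The quoted efficiencies are consistent with the classical efficiency index E = p^(1/θ), where p is the convergence order and θ the number of function evaluations per iteration; θ = 4 is inferred here from the reported figures (8^(1/4) ≈ 1.6818, 9^(1/4) ≈ 1.7320, 10^(1/4) ≈ 1.7783), not stated in the abstract:

```python
def efficiency_index(p, theta=4):
    """Classical efficiency index: convergence order p achieved with
    theta function evaluations per iteration step."""
    return p ** (1.0 / theta)
```

This makes the abstract's point concrete: raising the order from 8 to 9 or 10 at a fixed cost of four evaluations strictly raises the index.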

AppliedMath doi: 10.3390/appliedmath3040050

Authors: Alex Santana dos Santos Marcos Eduardo Valle

Max-C and min-D projection auto-associative fuzzy morphological memories (max-C and min-D PAFMMs) are two-layer feedforward fuzzy morphological neural networks designed to store and retrieve finite fuzzy sets. This paper addresses the main features of these auto-associative memories: unlimited absolute storage capacity, fast retrieval of stored items, few spurious memories, and excellent tolerance to either dilative or erosive noise. Particular attention is given to the so-called Zadeh's PAFMM, which exhibits the most significant noise tolerance among the max-C and min-D PAFMMs besides performing no floating-point arithmetic operations. Computational experiments reveal that Zadeh's max-C PAFMM, combined with a noise-masking strategy, yields a fast and robust classifier with strong potential for face recognition tasks.

AppliedMath doi: 10.3390/appliedmath3040049

Authors: Jochen Staudacher Tim Pollmann

Computing Shapley values for large cooperative games is an NP-hard problem. For practical applications, stochastic approximation via permutation sampling is widely used. In the context of machine learning applications of the Shapley value, the concept of antithetic sampling has become popular. The idea is to employ the reverse permutation of each sample in order to reduce variance and accelerate convergence of the algorithm. We study this approach for the Shapley and Banzhaf values, as well as for the Owen value, which is a solution concept for games with precoalitions. We combine antithetic samples with established stratified sampling algorithms. Finally, we evaluate the performance of these algorithms on four different types of cooperative games.
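A minimal sketch of antithetic permutation sampling for the Shapley value (the function names and the toy additive game are illustrative, not from the paper): each sampled permutation is traversed forwards and in reverse, and marginal contributions are averaged over both orders.

```python
import random

def shapley_antithetic(n, value, samples=100, seed=0):
    """Permutation-sampling Shapley estimator with antithetic pairs:
    every sampled permutation is also traversed in reverse."""
    rng = random.Random(seed)
    phi = [0.0] * n
    total = 0
    for _ in range(samples):
        perm = list(range(n))
        rng.shuffle(perm)
        for order in (perm, perm[::-1]):      # antithetic pair
            coalition = []
            prev = value(coalition)
            for player in order:
                coalition.append(player)
                cur = value(coalition)
                phi[player] += cur - prev     # marginal contribution
                prev = cur
            total += 1
    return [v / total for v in phi]
```

For an additive game every marginal contribution equals the player's weight, so the estimator is exact, and any permutation-sampling estimate preserves efficiency: the estimated values sum to v(N).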

AppliedMath doi: 10.3390/appliedmath3040048

Authors: Isaac Elishakoff Nicolas Yvain

In this study, we tackle interval quadratic equations, aiming to accurately determine the root enclosures of quadratic equations whose coefficients are interval variables. The study focuses on interval quadratic equations in which only one coefficient is an interval variable. The four methods reviewed here for solving this problem are: (i) the method of classic interval analysis used by Elishakoff and Daphnis, (ii) the direct method based on minimizations and maximizations, also used by the same authors, (iii) the method of quantifier elimination used by Ioakimidis, and (iv) the interval parametrization method suggested by Elishakoff and Miglis, again based on minimizations and maximizations. We also compare the results yielded by all these methods, using the computer algebra system Mathematica for the computer evaluations (including quantifier eliminations), in order to conclude which method is the most efficient for solving problems involving interval quadratic equations.
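A sketch of the direct minimization/maximization idea for the case where the linear coefficient is the single interval variable (the grid search stands in for a proper optimization, and real roots are assumed throughout the interval; function and parameter names are hypothetical):

```python
import math

def root_enclosure(a, b_interval, c, samples=1001):
    """Enclose the two real roots of a*x**2 + b*x + c as b varies over an
    interval, by minimising/maximising each root over a fine grid of b."""
    b_lo, b_hi = b_interval
    lows, highs = [], []
    for k in range(samples):
        b = b_lo + (b_hi - b_lo) * k / (samples - 1)
        disc = math.sqrt(b * b - 4 * a * c)   # assumes real roots
        r1, r2 = (-b - disc) / (2 * a), (-b + disc) / (2 * a)
        lows.append(min(r1, r2))
        highs.append(max(r1, r2))
    # enclosure = [min, max] of each root over the interval coefficient
    return (min(lows), max(lows)), (min(highs), max(highs))
```

For x² + bx + 2 with b ∈ [3, 4], each root is monotone in b, so the enclosure endpoints are attained at b = 3 and b = 4, which the grid hits exactly.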

AppliedMath doi: 10.3390/appliedmath3040047

Authors: Ivan Arraut Ka-I Lei

We review some general aspects of the Black–Scholes equation, which is used for predicting the fair price of an option in the stock market. Our analysis includes the symmetry properties of the equation and its solutions, for which we use the Hamiltonian formulation. Taking into account that the volatility in the Black–Scholes equation is a parameter, we then introduce the Merton–Garman equation, where the volatility is stochastic and can therefore be perceived as a field. We then show how the Black–Scholes equation and the Merton–Garman equation are locally equivalent by imposing a gauge symmetry under changes in the prices on the Black–Scholes equation. This demonstrates that the stochastic volatility emerges naturally from symmetry arguments. Finally, we analyze the role of the volatility in the decisions taken by the holders of options when they use the solution of the Black–Scholes equation as a tool for making investment decisions.
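For reference, the closed-form Black–Scholes fair price of a European call, the solution such holders would evaluate, can be sketched using only the standard normal CDF (expressed via the error function); the numerical inputs in the usage note are illustrative:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, r, sigma, T):
    """Black-Scholes price of a European call with constant volatility sigma."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)
```

The price is increasing in sigma, which is precisely why the constant-volatility assumption matters for the holder's decision and why promoting sigma to a stochastic field (Merton–Garman) changes the picture.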

AppliedMath doi: 10.3390/appliedmath3040046

Authors: Kunle Adegoke Robert Frontczak Taras Goy

In this paper, we provide a first systematic treatment of binomial sum relations involving (generalized) Fibonacci and Lucas numbers. The paper introduces various classes of relations involving (generalized) Fibonacci and Lucas numbers and different kinds of binomial coefficients. We also present some novel relations between sums with two and three binomial coefficients. In the course of exploration, we rediscover a few isolated results existing in the literature, commonly presented as problem proposals.
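One classical relation of the kind treated here is the binomial Fibonacci sum Σ_{k=0}^{n} C(n,k) F_k = F_{2n} (with F_0 = 0, F_1 = 1), which can be checked numerically:

```python
from math import comb
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Fibonacci numbers with F_0 = 0, F_1 = 1."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)

def binomial_fib_sum(n):
    """Classical binomial relation: sum_k C(n,k) * F_k = F_{2n}."""
    return sum(comb(n, k) * fib(k) for k in range(n + 1))
```

For example, n = 3 gives 0 + 3·1 + 3·1 + 2 = 8 = F_6; the analogous identity with Lucas numbers in place of Fibonacci numbers yields L_{2n}.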

AppliedMath doi: 10.3390/appliedmath3040045

Authors: J. Leonel Rocha Sónia Carvalho Beatriz Coimbra

This paper introduces the mathematical formalization of two probabilistic procedures for susceptible-infected-recovered (SIR) and susceptible-infected-susceptible (SIS) infectious disease epidemic models over Erdős–Rényi contact networks. In our approach, we consider the epidemic threshold, for both models, defined by the inverse of the spectral radius of the associated adjacency matrices, which expresses the network topology. The epidemic threshold dynamics are analyzed, depending on the global dynamics of the network structure. The main contribution of this work is the relationship established between the epidemic threshold and the topological entropy of the Erdős–Rényi contact networks. In addition, a relationship between the basic reproduction number and the topological entropy is also stated. The trigger of the infectious state is studied, and the probability of the stability of the infected state after the first instant, depending on the degree of the nodes in the seed set, is proven. Some numerical studies are included to illustrate the implementation of the probabilistic procedures introduced, complementing the discussion on the choice of the seed set.
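The threshold computation described above can be sketched in pure Python (the graph size, edge probability, and power-iteration routine are illustrative choices, not the paper's): the epidemic threshold is the inverse of the spectral radius of the adjacency matrix, which for an Erdős–Rényi graph concentrates near n·p.

```python
import random

def erdos_renyi(n, p, seed=0):
    """Symmetric 0-1 adjacency matrix of a G(n, p) random graph."""
    rng = random.Random(seed)
    A = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                A[i][j] = A[j][i] = 1
    return A

def spectral_radius(A, iters=500):
    """Power iteration on a symmetric nonnegative matrix."""
    n = len(A)
    x = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        lam = max(abs(v) for v in y) or 1.0
        x = [v / lam for v in y]
    return lam
```

The epidemic threshold is then `1.0 / spectral_radius(A)`: denser contact networks (larger spectral radius) have lower thresholds.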

AppliedMath doi: 10.3390/appliedmath3040044

Authors: Sara Mollaeivaneghi Allan Santos Florian Steinke

For linear optimization problems with a parametric objective, so-called parametric linear programs (PLPs), we show that the optimal decision values are, under a few technical restrictions, unimodal functions of the parameter, at least in the two-degrees-of-freedom case. Assuming that the parameter is random and follows a known probability distribution, this allows for an efficient algorithm to determine the quantiles of linear combinations of the optimal decisions. The novel results are demonstrated with probabilistic economic dispatch. For an example setup with uncertain fuel costs, quantiles of the resulting inter-regional power flows are computed. The approach is compared against Monte Carlo and piecewise computation techniques, demonstrating significantly reduced computation times for the novel procedure. This holds especially when the feasible set is complex and/or extreme quantiles are desired. This work is limited to problems with two effective degrees of freedom and a one-dimensional uncertainty. Future extensions to higher dimensions could yield a key tool for the analysis of probabilistic PLPs and, specifically, risk management in energy systems.

AppliedMath doi: 10.3390/appliedmath3040043

Authors: Alexander Uzhinskiy

According to the Food and Agriculture Organization, the world's food production needs to increase by 70 percent by 2050 to feed the growing population. However, the EU agricultural workforce has declined by 35% over the last decade, and 54% of agriculture companies have cited a shortage of staff as their main challenge. These factors, among others, have led to an increased interest in advanced technologies in agriculture, such as IoT, sensors, robots, unmanned aerial vehicles (UAVs), digitalization, and artificial intelligence (AI). Artificial intelligence and machine learning have proven valuable for many agricultural tasks, including problem detection, crop health monitoring, yield prediction, price forecasting, yield mapping, and the optimization of pesticide and fertilizer usage. In this scoping mini review, scientific achievements regarding the main directions of agricultural technologies are explored. Successful commercial companies, in both the Russian and international markets, that have effectively applied these technologies are highlighted. Additionally, a concise overview of various AI approaches is presented, and our firsthand experience in this field is shared.

AppliedMath doi: 10.3390/appliedmath3040042

Authors: Daniel A. Griffith

Two linear algebra problems call for solutions, creating the themes pursued in this paper. The first problem interfaces with graph theory via binary 0-1 adjacency matrices and their Laplacian counterparts. More contemporary spatial statistics/econometrics applications motivate the second problem, which involves approximating the eigenvalues of massively large versions of these two aforementioned matrices. The proposed solutions outlined in this paper are essentially a reformulated multiple linear regression analysis for the first problem and a matrix inertia refinement adapted to existing work for the second problem.

AppliedMath doi: 10.3390/appliedmath3040041

Authors: Yiqiao Wang Guanghong Ding Wei Yao

Based on the Hodgkin–Huxley theory, this paper establishes several nonlinear system models, analyzes the models' stability, and studies the conditions for repetitive discharge of the neuronal membrane potential. Our dynamic analysis showed that the main channel currents (the fast transient sodium current, the potassium delayed rectifier current, and the fixed leak current) of a neuron determine its dynamic properties and that the GHK formula greatly widens the stimulation current range of the repetitive discharge condition compared with the Nernst equation. The model including the change in ion concentration leads to spreading depression (SD)-like depolarization, and the inclusion of a Na-K pump weakens the current stimulation effect by decreasing the extracellular K accumulation. The results indicate that the Hodgkin–Huxley model is suitable for describing the response to initial stimuli but, due to changes in ion concentration, not for describing the response to long-term stimuli.

AppliedMath doi: 10.3390/appliedmath3040040

Authors: Muhsin Tamturk Marco Carenzo

In this study, we design an algorithm to work on gate-based quantum computers. Based on the algorithm, we construct a quantum circuit that represents the surplus process of a cedant under a reinsurance agreement. This circuit takes into account a variety of factors: initial reserve, insurance premium, reinsurance premium, and specific amounts related to claims, retention, and deductibles for two different non-proportional reinsurance contracts. Additionally, we demonstrate how to perturb the actuarial stochastic process using Hadamard gates to account for unpredictable damage. We conclude by presenting graphs and numerical results to validate our capital modelling approach.

AppliedMath doi: 10.3390/appliedmath3040039

Authors: Aghalaya S. Vatsala Govinda Pageni

Computing the solution of a Caputo fractional differential equation plays an important role in using the order of the fractional derivative as a parameter to enhance a model. In this work, we developed a power series solution method to solve a linear Caputo fractional differential equation of order q, 0 &lt; q &lt; 1, and this solution matches the integer-order solution for q = 1. In addition, we also developed a series solution method for a linear sequential Caputo fractional differential equation with constant coefficients of order 2q, which is sequential for order q, with Caputo fractional initial conditions. The advantage of our method is that the fractional order q can be used as a parameter to enhance the mathematical model, compared with the integer-order model. The methods developed here, namely, the series solution method for solving Caputo fractional differential equations with constant coefficients, can be extended to Caputo sequential differential equations with variable coefficients, such as the fractional Bessel's equation with fractional initial conditions.
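For the scalar test equation D^q y = λy, y(0) = y0 (an illustrative special case, not the paper's general setting), the power series solution is y(t) = y0·E_q(λt^q), where E_q(z) = Σ_k z^k / Γ(qk + 1) is the Mittag-Leffler function; at q = 1 this reduces to the exponential, matching the integer-order solution:

```python
import math

def ml_series(q, z, terms=80):
    """Truncated Mittag-Leffler series E_q(z) = sum_k z**k / Gamma(q*k + 1).
    y(t) = y0 * ml_series(q, lam * t**q) solves the Caputo equation
    D^q y = lam * y with y(0) = y0."""
    return sum(z ** k / math.gamma(q * k + 1) for k in range(terms))
```

At q = 1 the series is the Taylor series of e^z; at q = 1/2 it satisfies the known identity E_{1/2}(z) = e^{z²}·erfc(−z), which gives an independent numerical check.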

AppliedMath doi: 10.3390/appliedmath3040038

Authors: Robert Gardner Matthew Gladin

Motivated by results on the location of the zeros of a complex polynomial with monotonicity conditions on the coefficients (such as the classical Eneström–Kakeya theorem and its recent generalizations), we impose similar conditions and give bounds on the number of zeros in certain regions. We do so by introducing a reversal in monotonicity conditions on the real and imaginary parts of the coefficients and also on their moduli. The conditions imposed are less restrictive than many of those in the current literature and hence apply to polynomials not covered by previous results. The results presented naturally apply to certain classes of lacunary polynomials. In particular, the results apply to certain polynomials with two gaps in their coefficients.
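The classical Eneström–Kakeya statement (if 0 &lt; a_0 ≤ a_1 ≤ … ≤ a_n, then all zeros of Σ a_k z^k lie in |z| ≤ 1) can be checked numerically; the Durand–Kerner root finder below is an illustrative helper for the check, not part of the paper:

```python
def roots_durand_kerner(coeffs, iters=200):
    """All complex roots of p(z) = sum coeffs[k] * z**k via the
    Durand-Kerner simultaneous iteration."""
    n = len(coeffs) - 1
    lead = coeffs[-1]
    p = lambda z: sum(c * z ** k for k, c in enumerate(coeffs))
    # classical distinct starting points (0.4 + 0.9i)^k
    zs = [(0.4 + 0.9j) ** k for k in range(n)]
    for _ in range(iters):
        for i in range(n):
            den = 1.0
            for j in range(n):
                if j != i:
                    den *= zs[i] - zs[j]
            zs[i] -= p(zs[i]) / (lead * den)
    return zs
```

The canonical test polynomial 1 + 2z + 3z² + 4z³ + 5z⁴ has increasing positive coefficients, so Eneström–Kakeya guarantees all four zeros lie in the closed unit disk.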

AppliedMath doi: 10.3390/appliedmath3040037

Authors: Oluwatosin Babasola Evans Otieno Omondi Kayode Oshinubi Nancy Matendechere Imbusi

Mathematical models have been of great importance in various fields, especially for understanding the dynamical behaviour of biosystems. Several models, based on classical ordinary differential equations, delay differential equations, and stochastic processes, are commonly employed to gain insights into these systems. However, there is potential to extend such models further by combining features from the classical approaches. This work investigates models based on stochastic delay differential equations (SDDEs) to understand the behaviour of biosystems. Numerical techniques for solving these models, which demonstrate a more robust representation of real-life scenarios, are presented. Additionally, the quantitative roles of delay and noise are analysed to gain a deeper understanding of their influence on the system's overall behaviour. Subsequently, numerical simulations that illustrate the model's robustness are provided, and the results suggest that SDDEs provide a more comprehensive representation of many biological systems, effectively accounting for the uncertainties that arise in real-life situations.

AppliedMath doi: 10.3390/appliedmath3030036

Authors: Edoardo Ballico

Let X be a smooth projective variety and f: X → P^r a morphism birational onto its image. We define the Terracini loci of the map f. Most results are only for the case dim X = 1. With this new and more flexible definition, it is possible to prove strong nonemptiness results with a full classification of all exceptional cases. We also consider Terracini loci with restricted support (solutions not intersecting a closed set B ⊊ X, or solutions containing a prescribed p ∈ X). Our definitions work for both the Zariski and the Euclidean topology, and we suggest extensions to the case of real varieties. We also define Terracini loci for joins of two or more subvarieties of the same projective space. The proofs use algebro-geometric tools.

AppliedMath doi: 10.3390/appliedmath3030035

Authors: Jose Pablo Rodriguez David F. Muñoz

The Mexico City Metrobus is one of the most popular forms of public transportation in the city. Since its opening in 2005, it has become a vital piece of infrastructure, and its optimal functioning is of key importance to Mexico City, as it plays a crucial role in moving millions of passengers every day. This paper presents a model to simulate Line 1 of the Mexico City Metrobus, which can be adapted to simulate other bus rapid transit (BRT) systems. We give a detailed description of the model development so that the reader can replicate our model. We developed various response variables to evaluate the system's performance, focused on passenger satisfaction: the maximum occupancy that a passenger experiences inside the buses and the time they spend in the queues at the stations. The results of the experiments show that it is possible to increase passenger satisfaction by considering different combinations of routes while maintaining the same fuel consumption. It was shown that, with an appropriate combination of routes, the average passenger satisfaction could surpass the satisfaction levels obtained by a 10% increase in total fuel consumption.

AppliedMath doi: 10.3390/appliedmath3030034

Authors: Nicola Cufaro Petroni

In this article, some prescriptions to define a distribution on the set Q0 of all rational numbers in [0,1] are outlined. We explore a few properties of these distributions and the possibility of making these rational numbers asymptotically equiprobable in a suitable sense. In particular, it will be shown that in the said limit, albeit no absolutely continuous uniform distribution can be properly defined on Q0, the probability allotted to every single q ∈ Q0 asymptotically vanishes, while that of the subset of Q0 falling in an interval [a,b] ⊆ Q0 goes to b − a. We finally present some hints toward completely sequencing the numbers in Q0 without repetition, as a prerequisite to laying down further distributions on it.

AppliedMath doi: 10.3390/appliedmath3030033

Authors: Ayokunle J. Tadema Micheal O. Ogundiran

This paper is concerned with the existence of solutions of a class of Cauchy problems for hyperbolic partial fractional differential inclusions (HPFDs) involving the Caputo fractional derivative with an impulse, whose right-hand side is convex and non-convex valued. Our results are achieved within the framework of the nonlinear alternative of Leray–Schauder type and contraction multivalued maps. A detailed example is provided to support the theorem.

AppliedMath doi: 10.3390/appliedmath3030032

Authors: Fateh Mohamed Ali Adhnouss Husam M. Ali El-Asfour Kenneth McIsaac Idris El-Feghi

Artificial Intelligence (AI) systems are increasingly being deployed in decentralized environments where they interact with other AI systems and humans. In these environments, each participant may have different ways of expressing the same semantics, leading to challenges in communication and collaboration. To address these challenges, this paper presents a novel hybrid model for shared conceptualization in decentralized AI systems. This model integrates ontology, epistemology, and epistemic logic, providing a formal framework for representing and reasoning about shared conceptualization. It captures both the intensional and extensional components of the conceptualization structure and incorporates epistemic logic to capture knowledge and belief relationships between agents. The model&rsquo;s unique contribution lies in its ability to handle different perspectives and beliefs, making it particularly suitable for decentralized environments. To demonstrate the model&rsquo;s practical application and effectiveness, it is applied to a scenario in the healthcare sector. The results show that the model has the potential to improve AI system performance in a decentralized context by enabling efficient communication and collaboration among agents. This study fills a gap in the literature concerning the representation of shared conceptualization in decentralized environments and provides a foundation for future research in this area.

AppliedMath doi: 10.3390/appliedmath3030031

Authors: David Fernando Muñoz

When there is uncertainty in the value of parameters of the input random components of a stochastic simulation model, two-level nested simulation algorithms are used to estimate the expectation of performance variables of interest. In the outer level of the algorithm, n observations are generated for the parameters, and in the inner level, m observations of the simulation model are generated with the parameter values fixed at those generated in the outer level. In this article, we consider the case in which the observations at both levels of the algorithm are independent and show how the variance of the observations can be decomposed into the sum of a parametric variance and a stochastic variance. Next, we derive central limit theorems that allow us to compute asymptotic confidence intervals to assess the accuracy of the simulation-based estimators for the point forecast and the variance components. Under this framework, we derive analytical expressions for the point forecast and the variance components of a Bayesian model to forecast sporadic demand, and we use these expressions to illustrate the validity of our theoretical results by performing simulation experiments with this forecast model. We found that, given a fixed total number of observations nm, the choice of only one replication in the inner level (m = 1) is recommended to obtain a more accurate estimator for the expectation of a performance variable.
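The decomposition is an instance of the law of total variance, Var(Y) = Var(E[Y|θ]) + E[Var(Y|θ)], i.e., parametric variance plus stochastic variance. The Gaussian model below is an illustrative assumption for the sketch, not the Bayesian demand model of the paper:

```python
import random

def nested_simulation(n=4000, m=1, seed=7):
    """Two-level nested simulation: the outer level draws a parameter theta,
    the inner level draws m observations of the model given theta.
    Illustrative model: theta ~ N(0, 1) and Y | theta ~ N(theta, 2**2),
    so E[Y] = 0 and Var(Y) = 1 (parametric) + 4 (stochastic) = 5."""
    rng = random.Random(seed)
    ys = []
    for _ in range(n):
        theta = rng.gauss(0.0, 1.0)            # outer level: parameter draw
        for _ in range(m):
            ys.append(rng.gauss(theta, 2.0))   # inner level: model draw
    mean = sum(ys) / len(ys)
    var = sum((y - mean) ** 2 for y in ys) / (len(ys) - 1)
    return mean, var
```

With the total budget nm fixed, setting m = 1 maximizes the number of distinct parameter draws, which is the intuition behind the paper's recommendation.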

AppliedMath doi: 10.3390/appliedmath3030030

Authors: Luis A. Moncayo-Martínez Elias H. Arias-Nava

The simple assembly line balancing (SALB) problem is a significant challenge faced by industries across various sectors aiming to optimise production line efficiency and resource allocation. One important issue when the decision-maker balances a line is how to keep the cycle time under a given value across all cells, even though there is variability in some parameters. When there are stochastic elements, some approaches use constraint relaxation, intervals for the stochastic parameters, and fuzzy numbers. In this paper, a three-part algorithm is proposed that first solves the balancing problem without considering stochastic parameters; then, using simulation, it measures the effect of some parameters (in this case, the inter-arrival time, the processing times, the speed of the material handling system, which is manually operated by the workers in the cell, and the number of workers who perform the tasks on the machines); finally, the OptQuest add-on in SIMIO solves an optimisation problem to constrain the cycle time, using the stochastic parameters as decision variables. A Gearbox instance from the literature, with 15 tasks and 14 precedence rules, is solved to test the proposed approach. The deterministic balancing problem is solved optimally using the open-source solver GLPK and the Pyomo programming language, and, with simulation, the proposed algorithm keeps the cycle time less than or equal to 70 s in the presence of variability and deterministic inter-arrival times. Meanwhile, with stochastic inter-arrival times, the maximum cell cycle time is 72.04 s. The reader can download the source code and the simulation models from the authors' GitHub page.

AppliedMath doi: 10.3390/appliedmath3030029

Authors: Sangeeta Yadav

We propose a Quantum Neural Network (QNN) for predicting the stabilization parameter used in solving singularly perturbed partial differential equations (SPDEs) with the Streamline Upwind Petrov–Galerkin (SUPG) stabilization technique. SPDE-Q-Net, a QNN, is proposed for approximating an optimal value of the SUPG stabilization parameter for two-dimensional convection-diffusion problems. Our motivation for this work stems from the recent progress made in quantum computing and the striking similarities observed between neural networks and quantum circuits. Just as weight parameters are adjusted in traditional neural networks, the parameters of the quantum circuit, specifically the qubits' degrees of freedom, can be fine-tuned to learn a nonlinear function. The performance of SPDE-Q-Net is found to be on par with SPDE-Net, a traditional neural-network-based technique for stabilization parameter prediction, in terms of the numerical error in the solution. SPDE-Q-Net is also found to be faster than SPDE-Net, which suggests future benefits from the speed-up capabilities of quantum computing.

AppliedMath doi: 10.3390/appliedmath3030028

Authors: Jianying Zhang

As a class of non-Newtonian fluids with yield stresses, Bingham fluids possess both solid and liquid phases separated by implicitly defined non-physical yield surfaces, which makes standard numerical discretization challenging. The variational reformulation established by Duvaut and Lions, coupled with an augmented Lagrange method (ALM), yields a finite element approach, whereas the inevitable local mesh refinement and preconditioning of the resulting large-scale ill-conditioned linear system can be involved. Inspired by the mesh-free feature and architectural flexibility of physics-informed neural networks (PINNs), an ALM-PINN approach to steady-state Bingham fluid flow simulation, with dynamically adaptable weights, is developed and analyzed in this work. The PINN setting enables not only a pointwise ALM formulation but also the learning of families of (physical) parameter-dependent numerical solutions through one training process, and the incorporation of ALM into a PINN induces a more feasible loss function for deep learning. Numerical results obtained via the ALM-PINN training on one- and two-dimensional benchmark models are presented to validate the proposed scheme. The efficacy and limitations of the relevant loss formulation and optimization algorithms are also discussed to motivate some directions for future research.

AppliedMath doi: 10.3390/appliedmath3030027

Authors: Polychronis Manousopoulos Vasileios Drakopoulos Efstathios Polyzos

Time series of financial data are both frequent and important in everyday practice. Numerous applications are based, for example, on time series of asset prices or market indices. In this article, the application of fractal interpolation functions in modelling financial time series is examined. Our motivation stems from the fact that financial time series often present fluctuations or abrupt changes which the fractal interpolants can inherently model. The results indicate that the use of fractal interpolation in financial applications is promising.

AppliedMath doi: 10.3390/appliedmath3020026

Authors: Emilio Matricciani

We have studied how the readability of a text can change in translation by considering Matthew's Gospel, written in Greek and translated into Latin and 35 modern languages. We have found that the deep-language parameters CP (characters per word), PF (words per sentence), IP (words per interpunction), MF (interpunctions per sentence) and a universal readability index GU of each translation are so diverse from language to language, and even within a given language for which there are many versions of Matthew (such as English and Spanish), that the resulting texts mathematically seem to be diverse. The several tens of versions of Matthew's Gospel studied appear to address very diverse audiences. If readers could understand all of them well, they would have the impression of reading texts written by diverse authors, although all of them tell the same story.
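The deep-language parameters named above can be sketched directly from their definitions (the choice of which punctuation marks count as interpunctions, and the word-matching pattern, are assumptions of this sketch):

```python
import re

def deep_language_parameters(text):
    """Compute CP (characters per word), PF (words per sentence),
    IP (words per interpunction) and MF (interpunctions per sentence).
    Interpunctions here are taken to be , ; : . ! ? (a sketch assumption)."""
    words = re.findall(r"[A-Za-z']+", text)
    sentences = re.findall(r"[.!?]+", text) or [""]   # sentence-ending runs
    interp = re.findall(r"[,;:.!?]", text) or [""]
    cp = sum(len(w) for w in words) / len(words)
    pf = len(words) / len(sentences)
    ip = len(words) / len(interp)
    mf = len(interp) / len(sentences)
    return cp, pf, ip, mf
```

Note the built-in relation MF = PF / IP, so only three of the four parameters are independent for a given text.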

AppliedMath doi: 10.3390/appliedmath3020025

Authors: Abdolreza Aghajanpour Seyedalireza Khatibi

This research employs computational methods to analyze the velocity and mixture fraction distributions of a non-reacting Propane jet flow that is discharged into parallel co-flowing air under iso-thermal conditions. This study includes a comparison between the numerical results and experimental results obtained from the Sandia Laboratory (USA). The objective is to improve the understanding of flow structure and mixing mechanisms in situations where there is no involvement of chemical reactions or heat transfer. In this experiment, the Realizable k-&epsilon; eddy viscosity turbulence model with two equations was utilized to simulate turbulent flow on a nearly 2D plane (specifically, a 5-degree partition of the experimental cylinder domain). This was achieved using OpenFOAM open-source software and swak4Foam utility, with the reactingFoam solver being manipulated carefully. The selection of this turbulence model was based on its superior predictive capability for the spreading rate of both planar and round jets, as compared to other variants of the k-&epsilon; models. Numerical axial and radial profiles of different parameters were obtained for a mesh that is independent of the grid (mesh B). These profiles were then compared with experimental data to assess the accuracy of the numerical model. The parameters that are being referred to are mean velocities, turbulence kinetic energy, mean mixture fraction, mixture fraction half radius (Lf), and the mass flux diagram. The validity of the assumption that w&#2032; = v&#2032; for the determination of turbulence kinetic energy, k, seems to hold true in situations where experimental data is deficient in w&#2032;. The simulations have successfully obtained the mean mixture fraction and its half radius, Lf, which is a measure of the jet&rsquo;s width. These values were determined from radial profiles taken at specific locations along the X-axis, including x/D = 0, 4, 15, 30, and 50. 
The mean velocity field in the X-direction (Umean) is captured with noticeable accuracy, whereas the mean velocity field in the Y-direction (Vmean) is resolved comparatively less well. The accuracy of the turbulence kinetic energy (k) is moderate, lying between those of Umean and Vmean. The absence of empirical data for the absolute pressure (p) is compensated for by the provision of numerical pressure contours.

]]>AppliedMath doi: 10.3390/appliedmath3020024

Authors: Abdulaziz D. Alhaidari Abdallah Laradji

An algebraic system is introduced which is very useful for performing scattering calculations in quantum field theory. It is the set of all real numbers greater than or equal to &minus;m&sup2;, with a parity designation and a special rule for addition and subtraction, where m is the rest mass of the scattered particle.

]]>AppliedMath doi: 10.3390/appliedmath3020023

Authors: Richard D. Gill

We show how both smaller and more reliable p-values can be computed in Bell-type experiments by using statistical deviations from no-signalling equalities to reduce statistical noise in the estimation of Bell&rsquo;s S or Eberhard&rsquo;s J. Further improvement was obtained by using the Wilks likelihood ratio test based on the four tetranomially distributed vectors of counts of the four different outcome combinations, one 4-vector for each of the four setting combinations. The methodology was illustrated by application to the loophole-free Bell experiments of 2015 and 2016 performed in Delft and Munich, at NIST, and in Vienna, respectively, and also to the earlier (1998) Innsbruck experiment of Weihs et al. and the recent (2022) Munich experiment of Zhang et al., which investigates the use of a loophole-free Bell experiment as part of a protocol for device-independent quantum key distribution (DIQKD).
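As a rough sketch of the quantity being estimated, the CHSH statistic S can be computed from the four counts of outcome combinations at each of the four setting combinations. The counts below are toy numbers for illustration, not data from the experiments discussed, and the sign convention is the standard CHSH one, which may differ from that of the individual papers:

```python
def correlation(counts):
    """E = (N++ + N-- - N+- - N-+) / N for one setting combination."""
    npp, npm, nmp, nmm = counts
    total = npp + npm + nmp + nmm
    return (npp + nmm - npm - nmp) / total

def chsh_s(c_ab, c_abp, c_apb, c_apbp):
    """CHSH statistic S = E(a,b) - E(a,b') + E(a',b) + E(a',b')."""
    return (correlation(c_ab) - correlation(c_abp)
            + correlation(c_apb) + correlation(c_apbp))

# Toy counts (hypothetical), each tuple being (N++, N+-, N-+, N--):
s = chsh_s((85, 15, 15, 85),   # settings (a, b):  E = +0.7
           (15, 85, 85, 15),   # settings (a, b'): E = -0.7
           (85, 15, 15, 85),   # settings (a', b): E = +0.7
           (85, 15, 15, 85))   # settings (a', b'): E = +0.7
```

With these toy counts, S = 2.8 exceeds the local-realist bound of 2, which is the kind of violation whose statistical significance the p-value methods above quantify.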

]]>AppliedMath doi: 10.3390/appliedmath3020022

Authors: Md Easin Hasan Fahad Mostafa Md S. Hossain Jonathon Loftin

Hepatocellular carcinoma (HCC) is the most frequently occurring primary liver cancer. The risk of developing HCC is highest in those with chronic liver diseases, such as cirrhosis brought on by hepatitis B or C infection. Knowledge-based interpretations are essential for understanding the HCC microarray dataset due to its nature, which includes high dimensions and hidden biological information in genes. When analyzing gene expression data with many genes and few samples, the main problem is to separate disease-related information from a vast quantity of redundant gene expression data and noise. Clinicians are interested in identifying the specific genes responsible for HCC in individual patients. These responsible genes may differ between patients, leading to variability in gene selection. Moreover, ML approaches, such as classification algorithms, behave like black boxes, and it is important to interpret the ML model outcomes. In this paper, we use a reliable pipeline to determine important genes for discovering HCC from microarray analysis. We eliminate redundant and unnecessary genes through gene selection using principal component analysis (PCA). Moreover, we detect responsible genes with the random forest algorithm through variable importance ranking calculated from the Gini index. Classification algorithms, such as random forest (RF), na&iuml;ve Bayes classifier (NBC), logistic regression, and k-nearest neighbor (kNN), are used to classify HCC from responsible genes. However, classification algorithms produce outcomes based on selected genes for a large group of patients rather than for specific patients. Thus, we apply the local interpretable model-agnostic explanations (LIME) method to explain the AI-generated forecasts as well as recommendations for patient-specific responsible genes. 
Moreover, we show our pathway analysis and a dendrogram of the pathway through hierarchical clustering of the responsible genes. There are 16 responsible genes found using the Gini index, and CCT3 and KPNA2 show the highest mean decrease in Gini values. Among the four classification algorithms, random forest showed 96.53% accuracy with a precision of 97.30%. Five-fold cross-validation was used in order to collect multiple estimates and assess the variability of the RF model, with a mean ROC of 0.95 &plusmn; 0.2. LIME outcomes were interpreted for two random patients with positive and negative effects. Therefore, we identified 16 responsible genes that can be used to improve HCC diagnosis or treatment. The proposed framework using machine-learning-classification algorithms with the LIME method can be applied to find responsible genes to diagnose and treat HCC patients.

]]>AppliedMath doi: 10.3390/appliedmath3020021

Authors: Fabio Silva Botelho

In the first part of this article, we present a new proof for Korn&rsquo;s inequality in an n-dimensional context. The results are based on standard tools of real and functional analysis. For the final result, the standard Poincar&eacute; inequality plays a fundamental role. In the second part of the text, we develop a global existence result for a non-linear model of plates. We address a rather general type of boundary conditions, and the novelty here is the more relaxed restrictions concerning the external load magnitude.

]]>AppliedMath doi: 10.3390/appliedmath3020020

Authors: Sanjar M. Abrarov Rehan Siddiqui Rajinder Kumar Jagpal Brendan M. Quine

In this work, we derive a generalized series expansion of the arctangent function by using the enhanced midpoint integration (EMI). Algorithmic implementation of the generalized series expansion utilizes a two-step iteration without surd or complex numbers. The computational test we performed reveals that such a generalization improves the accuracy in computation of the arctangent function by many orders of magnitude with increasing integer M, associated with subintervals in the EMI formula. The generalized series expansion may be promising for practical applications. It may be particularly useful in practical tasks where extensive computations with arbitrary-precision floating-point numbers are needed. The algorithmic implementation of the generalized series expansion of the arctangent function shows a rapid convergence rate in the computation of digits of &pi; in the Machin-like formulas.
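The Machin-like formulas mentioned above combine arctangent evaluations at small arguments, where series expansions converge rapidly. A minimal sketch using the classic Machin formula and a plain Taylor expansion (not the paper&rsquo;s EMI-based expansion) illustrates the idea:

```python
import math

def arctan_taylor(x, terms=50):
    """Taylor series arctan(x) = x - x^3/3 + x^5/5 - ..., for |x| <= 1.

    Converges quickly for small |x|, which is why Machin-like formulas
    evaluate arctan only at small arguments such as 1/5 and 1/239."""
    return sum((-1) ** k * x ** (2 * k + 1) / (2 * k + 1) for k in range(terms))

# Machin's 1706 formula: pi/4 = 4*arctan(1/5) - arctan(1/239)
pi_est = 4 * (4 * arctan_taylor(1 / 5) - arctan_taylor(1 / 239))
```

With 50 terms the truncation error is far below double precision, so pi_est agrees with math.pi to machine accuracy; the paper&rsquo;s EMI-based expansion aims at the same kind of computation in arbitrary precision.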

]]>AppliedMath doi: 10.3390/appliedmath3020019

Authors: Roy M. Howard

Based on the geometry of a radial function, a sequence of approximations for arcsine, arccosine and arctangent is detailed. The approximations for arcsine and arccosine are sharp at the points zero and one. Convergence of the approximations is proved, and the convergence is significantly better than that of Taylor series approximations for arguments approaching one. The established approximations can be utilized as the basis for Newton&ndash;Raphson iteration, and analytical approximations of modest complexity, with relative error bounds of the order of 10&minus;16 and lower, can be defined. Applications of the approximations include: first, upper and lower bounding functions, of arbitrary accuracy, for arcsine, arccosine and arctangent. Second, approximations with significantly higher accuracy based on the upper or lower bounded approximations. Third, approximations for the square of arcsine with better convergence than well-established series for this function. Fourth, approximations to arccosine and arcsine, to even order powers, with relative errors that are significantly lower than published approximations. Fifth, approximations for the inverse tangent integral function and several unknown integrals.
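The use of such approximations as seeds for Newton&ndash;Raphson iteration can be sketched as follows. The crude initial guess y0 = x below is an assumption for illustration only, not one of the radial-function approximations of the paper, and the iteration degrades as |x| approaches one, where cos(y) vanishes:

```python
import math

def asin_newton(x, y0, iterations=4):
    """Refine an initial guess y0 for arcsin(x) by Newton's method
    applied to f(y) = sin(y) - x, i.e. y <- y + (x - sin y)/cos y."""
    y = y0
    for _ in range(iterations):
        y += (x - math.sin(y)) / math.cos(y)
    return y

# Crude seed (hypothetical): y0 = x is adequate for moderate |x|.
val = asin_newton(0.5, 0.5)
```

Because Newton&rsquo;s method converges quadratically, four iterations from even this crude seed already reach machine precision for moderate arguments; a sharper seed, such as the approximations in the paper, reduces the iteration count further.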

]]>AppliedMath doi: 10.3390/appliedmath3020018

Authors: Aurora Poggi Luca Di Persio Matthias Ehrhardt

Our research involves analyzing the latest models used for electricity price forecasting, which include both traditional inferential statistical methods and newer deep learning techniques. Through our analysis of historical data and the use of multiple weekday dummies, we have proposed an innovative solution for forecasting electricity spot prices. This solution involves breaking down the spot price series into two components: a seasonal trend component and a stochastic component. By utilizing this approach, we are able to provide highly accurate predictions for all considered time frames.
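The decomposition idea can be sketched with day-of-week dummies, whose ordinary least squares fit reduces to per-weekday means. The synthetic prices below are hypothetical, and the paper&rsquo;s model is considerably richer (multiple dummies, trend estimation, and deep learning components):

```python
def decompose_weekday(prices):
    """Split a daily price series into a weekday-seasonal component
    (day-of-week means, which equal the OLS fit on weekday dummies)
    and a stochastic residual component."""
    by_day = {d: [] for d in range(7)}
    for t, p in enumerate(prices):
        by_day[t % 7].append(p)
    means = {d: sum(v) / len(v) for d, v in by_day.items() if v}
    seasonal = [means[t % 7] for t in range(len(prices))]
    residual = [p - s for p, s in zip(prices, seasonal)]
    return seasonal, residual

# Two weeks of synthetic prices with a weekend dip (hypothetical values):
prices = [50, 52, 54, 53, 55, 40, 38, 51, 53, 55, 54, 56, 41, 39]
seasonal, residual = decompose_weekday(prices)
```

By construction the residuals sum to zero within each weekday, leaving the stochastic component to be modelled separately, which mirrors the two-component split described above.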

]]>AppliedMath doi: 10.3390/appliedmath3020017

Authors: Mitsuhiro Miyazaki

Let K be a field. In this paper, we construct a sequence of Cohen&ndash;Macaulay standard graded K-domains whose h-vectors are non-flawless and have exponentially deep flaws.

]]>AppliedMath doi: 10.3390/appliedmath3020016

Authors: Hongli Zhou Xiao Tang Rongle Zhao

In interval-valued three-way decision, the reflection of decision-makers&rsquo; preference under the full consideration of interval-valued characteristics is particularly important. In this paper, we propose an interval-valued three-way decision model based on the cumulative prospect theory. First, by means of the interval distance measurement method, the loss function and the gain function are constructed to reflect the differences of interval radius and expectation simultaneously. Second, combined with the reference point, the prospect value function is utilized to reflect decision-makers&rsquo; different risk preferences for gains and losses. Third, the calculation method of cumulative prospect value for taking action is given through the transformation of the prospect value function and cumulative weight function. Then, the new decision rules are deduced based on the principle of maximizing the cumulative prospect value. Finally, in order to verify the effectiveness and feasibility of the algorithm, the prospect value for decision-making and threshold changes are analyzed under different risk attitudes and different radii of the interval-valued decision model. In addition, compared with the interval-valued decision rough set model, our method in this paper has better decision prospects.

]]>AppliedMath doi: 10.3390/appliedmath3020015

Authors: Ivie Stein Md Nurul Raihen

In this work, we studied convergence rates using quotient convergence factors and root convergence factors, as described by Ortega and Rheinboldt, for Hestenes&rsquo; Gram&ndash;Schmidt conjugate direction method without derivatives. We performed computations in order to make a comparison between this conjugate direction method, for minimizing a nonquadratic function f, and Newton&rsquo;s method, for solving &nabla;f=0. Our primary purpose was to implement Hestenes&rsquo; CGS method with no derivatives and determine convergence rates.

]]>AppliedMath doi: 10.3390/appliedmath3010014

Authors: Nasrin Shabani Amin Beheshti Helia Farhood Matt Bower Michael Garrett Hamid Alinejad-Rokny

Numerous studies have established a correlation between creativity and intrinsic motivation to learn, with creativity defined as the process of generating original and valuable ideas, often by integrating perspectives from different fields. The field of educational technology has shown a growing interest in leveraging technology to promote creativity in the classroom, with several studies demonstrating the positive impact of creativity on learning outcomes. However, mining creative thinking patterns from educational data remains a challenging task, even with the proliferation of research on adaptive technology for education. This paper presents an initial effort towards formalizing educational knowledge by developing a domain-specific Knowledge Base that identifies key concepts, facts, and assumptions essential for identifying creativity patterns. Our proposed pipeline involves modeling raw educational data, such as assessments and class activities, as a graph to facilitate the contextualization of knowledge. We then leverage a rule-based approach to enable the mining of creative thinking patterns from the contextualized data and knowledge graph. To validate our approach, we evaluate it on real-world datasets and demonstrate how the proposed pipeline can enable instructors to gain insights into students&rsquo; creative thinking patterns from their activities and assessment tasks.

]]>AppliedMath doi: 10.3390/appliedmath3010013

Authors: Radhakumari Maya Muhammed Rasheed Irshad Muhammed Ahammed Christophe Chesneau

In this research work, a new three-parameter lifetime distribution is introduced and studied. It is called the Harris extended Bilal distribution due to its construction from a mixture of the famous Bilal and Harris distributions, resulting from a branching process. The basic properties, such as the moment generating function, moments, quantile function, and R&eacute;nyi entropy, are discussed. We show that the hazard rate function has ideal features for modeling increasing, upside-down bathtub, and roller-coaster data sets. In a second part, the Harris extended Bilal model is investigated from a statistical viewpoint. The maximum likelihood estimation is used to estimate the parameters, and a simulation study is carried out. The flexibility of the proposed model in a hydrological data analysis scenario is demonstrated using two practical data sets and compared with important competing models. After that, we establish an acceptance sampling plan that takes advantage of all of the features of the Harris extended Bilal model. The operating characteristic values, the minimum sample size that corresponds to the maximum possible defects, and the minimum ratios of lifetime associated with the producer&rsquo;s risk are discussed.

]]>AppliedMath doi: 10.3390/appliedmath3010012

Authors: Jasmine Renee Evans Asamoah Nkwanta

The leftmost column entries of RNA arrays I and II count the RNA numbers that are related to RNA secondary structures from molecular biology. RNA secondary structures sometimes have mutations and wobble pairs. Mutations are random changes that occur in a structure, and wobble pairs are known as non-Watson&ndash;Crick base pairs. We used topics from RNA combinatorics and Riordan array theory to establish connections among combinatorial objects related to linear trees, lattice walks, and RNA arrays. In this paper, we establish interesting new explicit bijections (one-to-one correspondences) involving certain subclasses of linear trees, lattice walks, and RNA secondary structures. We provide an interesting generalized lattice walk interpretation of RNA array I. In addition, we provide a combinatorial interpretation of RNA array II as RNA secondary structures with n bases and k base-point mutations where &omega; of the structures contain wobble base pairs. We also establish an explicit bijection between RNA structures with mutations and wobble bases and a certain subclass of lattice walks.

]]>AppliedMath doi: 10.3390/appliedmath3010011

Authors: Wullianallur Raghupathi Viju Raghupathi Aditya Saharia

This research studies the occurrence of data breaches in healthcare provider settings regarding patient data. Using visual analytics and data visualization tools, we study the distribution of healthcare breaches by state. We review the main causes and types of breaches, as well as their impact on both providers and patients. The research shows a range of data breach victims. Network servers are the most popular location for common breaches, such as hacking and information technology (IT) incidents, unauthorized access, theft, loss, and improper disposal. We offer proactive recommendations to prepare for a breach. These include, but are not limited to, regulatory compliance, implementing policies and procedures, and monitoring network servers. Unfortunately, the results indicate that the probability of data breaches will continue to rise.

]]>AppliedMath doi: 10.3390/appliedmath3010010

Authors: Christophe Chesneau

Copula analysis was created to explain the dependence of two or more quantitative variables. Due to the need for in-depth data analysis involving complex variable relationships, there is always a need for new copula models with original features. As a modern example, for the analysis of circular or periodic data types, trigonometric copulas are particularly attractive and recommended. This is, however, an underexploited topic. In this article, we propose a new collection of eight trigonometric and hyperbolic copulas, four based on the sine function and the others on the tangent function, all derived from the construction of the famous Farlie&ndash;Gumbel&ndash;Morgenstern copula. In addition to their original trigonometric and hyperbolic functionalities, the proposed copulas have the feature of depending on three parameters with complementary roles: one is a dependence parameter; one is a shape parameter; and the last can be viewed as an angle parameter. In our main findings, for each of the eight copulas, we determine a wide range of admissible values for these parameters. Subsequently, the capabilities, features, and functions of the new copulas are thoroughly examined. The shapes of the main functions of some copulas are illustrated graphically. Theoretically, symmetry in general, stochastic dominance, quadrant dependence, tail dependence, Archimedean nature, correlation measures, and inference on the parameters are investigated. Some copula shapes are illustrated with the help of figures. On the other hand, some two-dimensional inequalities are established and may be of separate interest.
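For orientation, the base Farlie&ndash;Gumbel&ndash;Morgenstern construction that the proposed copulas extend is shown below. The trigonometric and hyperbolic variants of the article replace the perturbation term and add shape and angle parameters, which are not reproduced here:

```python
def fgm_copula(u, v, theta):
    """Farlie-Gumbel-Morgenstern copula C(u,v) = uv(1 + theta(1-u)(1-v)),
    admissible for -1 <= theta <= 1."""
    return u * v * (1 + theta * (1 - u) * (1 - v))

# Uniform-margin boundary conditions that any copula must satisfy:
checks = [abs(fgm_copula(u, 1.0, 0.5) - u) < 1e-12 for u in (0.1, 0.5, 0.9)]
```

The boundary checks C(u, 1) = u and C(1, v) = v hold for every admissible theta; establishing the analogous admissible parameter ranges for the trigonometric and hyperbolic perturbations is one of the main findings of the article.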

]]>AppliedMath doi: 10.3390/appliedmath3010009

Authors: Akihiro Nishiyama Shigenori Tanaka Jack A. Tuszynski

We show renormalization in Quantum Brain Dynamics (QBD) in&nbsp;3+1 dimensions, namely Quantum Electrodynamics with water rotational dipole fields. First, we introduce the Lagrangian density for QBD involving terms of water rotational dipole fields, photon fields and their interactions. Next, we show Feynman diagrams with 1-loop self-energy and vertex function in dipole coupling expansion in QBD. The counter-terms are derived from the coupling expansion of the water dipole moment. Our approach will be applied to numerical simulations of Kadanoff&ndash;Baym equations for water dipoles and photons to describe the breakdown of the rotational symmetry of dipoles, namely memory formation processes. It will also be extended to the renormalization group method for QBD with running parameters in multi-scales.

]]>AppliedMath doi: 10.3390/appliedmath3010008

Authors: Changlun Ye Xianbing Luo

A multilevel Monte Carlo (MLMC) method is applied to simulate a stochastic optimal problem based on the gradient projection method. In the numerical simulation of the stochastic optimal control problem, the approximation of expected value is involved, and the MLMC method is used to address it. The computational cost of the MLMC method and the convergence analysis of the MLMC gradient projection algorithm are presented. Two numerical examples are carried out to verify the effectiveness of our method.
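The telescoping idea behind MLMC can be sketched on a toy problem: estimating E[exp(Z)] for standard normal Z, with &ldquo;levels&rdquo; given by Taylor truncations of exp coupled on the same random draws. This is an illustrative assumption, not the stochastic optimal control setting of the paper:

```python
import math, random

def f_level(z, level):
    """Level-l approximation of exp(z): Taylor partial sum up to degree 2l + 1."""
    return sum(z ** k / math.factorial(k) for k in range(2 * level + 2))

def mlmc_estimate(samples_per_level, rng):
    """Telescoping MLMC estimator: E[f_L] = sum_l E[f_l - f_(l-1)], where the
    coarse and fine approximations are coupled on the same random draw, so
    the correction terms have small variance and need few samples."""
    estimate = 0.0
    for level, n in enumerate(samples_per_level):
        acc = 0.0
        for _ in range(n):
            z = rng.gauss(0.0, 1.0)
            coarse = f_level(z, level - 1) if level > 0 else 0.0
            acc += f_level(z, level) - coarse
        estimate += acc / n
    return estimate

rng = random.Random(42)
approx = mlmc_estimate([20000, 4000, 800], rng)  # target: E[exp(Z)] = exp(1/2)
```

The sample counts decrease with the level because the coupled corrections shrink; balancing the per-level costs and variances in this way is the source of the computational savings analysed in the paper.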

]]>AppliedMath doi: 10.3390/appliedmath3010007

Authors: Quan Yuan Zhixin Yang

The eigenvalue bounds of interval matrices are often required in some mechanical and engineering fields. In this paper, we improve the theoretical results presented in a previous paper &ldquo;A property of eigenvalue bounds for a class of symmetric tridiagonal interval matrices&rdquo; and provide a fast algorithm to find the upper and lower bounds of the interval eigenvalues of a class of symmetric tridiagonal interval matrices.

]]>AppliedMath doi: 10.3390/appliedmath3010006

Authors: AppliedMath Editorial Office

High-quality academic publishing is built on rigorous peer review [...]

]]>AppliedMath doi: 10.3390/appliedmath3010005

Authors: Garri Davydyan

Different hypotheses of carcinogenesis have been proposed based on local genetic factors and physiologic mechanisms. It is assumed that changes in the metric invariants of a biologic system (BS) determine the general mechanisms of cancer development. Numerous pieces of data demonstrate the existence of three invariant feedback patterns of a BS: negative feedback (NFB), positive feedback (PFB) and reciprocal links (RL). These base patterns represent basis elements of a Lie algebra sl(2,R) and the imaginary part of a coquaternion. Considering the coquaternion as a model of a functional core of a BS, a new geometric approach has been introduced in this work. Based on this approach, conditions of the system are identified with the points of three families of hypersurfaces in R42: hyperboloids of one sheet, hyperboloids of two sheets and double cones. The obtained results also demonstrated the correspondence of the indefinite metric of the coquaternion quadratic form with negative and positive entropy contributions of the base elements to the energy level of the system. From that, it can be further concluded that the anabolic states of the system will correspond to the points of a hyperboloid of one sheet, whereas catabolic conditions correspond to the points of a hyperboloid of two sheets. Equilibrium states will lie in a double cone. Physiologically, anabolic and catabolic states dominate intermittently, oscillating around the equilibrium. Deterioration of base elements increases positive entropy and causes domination of catabolic states, which is the main metabolic determinant of cancer. Based on these observations and the geometric representation of a BS&rsquo;s behavior, it was shown that conditions related to cancer metabolic malfunction will have a tendency to remain inside the double cone.

]]>AppliedMath doi: 10.3390/appliedmath3010004

Authors: Alexander Robitzsch

Linking errors in item response models quantify the dependence of means, standard deviations, or other distribution parameters on the chosen items. The jackknife approach is frequently employed in the computation of the linking error. However, this jackknife linking error can be computationally tedious if many items are involved. In this article, we provide an analytical approximation of the jackknife linking error. The newly proposed approach turns out to be computationally much less demanding. Moreover, the new linking error approach performed satisfactorily for datasets with at least 20 items.
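The baseline jackknife computation that the analytical approximation replaces can be sketched as follows, with hypothetical leave-one-item-out estimates as input (each obtained by refitting the linking with one item removed, which is the expensive step for many items):

```python
import math

def jackknife_linking_error(leave_one_out_estimates):
    """Jackknife standard error over I items:
    sqrt((I - 1)/I * sum_i (mu_(-i) - mean)^2),
    where mu_(-i) is the linking estimate with item i removed."""
    i_count = len(leave_one_out_estimates)
    mean = sum(leave_one_out_estimates) / i_count
    ssq = sum((m - mean) ** 2 for m in leave_one_out_estimates)
    return math.sqrt((i_count - 1) / i_count * ssq)

# Hypothetical leave-one-item-out mean estimates from a linking study:
le = jackknife_linking_error([0.1, 0.2, 0.3, 0.4])
```

Each input value requires one refit of the linking, so the cost grows linearly in the number of items, which is exactly the burden the article&rsquo;s analytical approximation avoids.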

]]>AppliedMath doi: 10.3390/appliedmath3010003

Authors: Serban Raicu Dorinela Costescu Mihaela Popa

Queue systems are essential in the modelling of transport systems. Increasing requirements from the beneficiaries of logistic services have led to a broadening of offerings. Consequently, models need to consider transport entities with priorities being assigned in relation to the costs corresponding to different classes of customers and/or processes. Waiting lines and queue disciplines substantially affect queue system performance. This paper aims to identify a solution for decreasing the waiting time, the total time in the system, and, overall, the cost linked to queueing delays. The influence of queue discipline on the waiting time and the total time in the system is analysed for several cases: (i) service for priority classes at the same rate of service with and without interruptions, and (ii) service for several priority classes with different service rates. The presented analysis is appropriate for increasing the performance of services dedicated to freight for two priority classes. It demonstrates how priority service can increase system performance by reducing the time in the system for customers with high costs. In addition, in the considered settings, the total time in the system is reduced for all customers, which leads to resource savings for system infrastructures.

]]>AppliedMath doi: 10.3390/appliedmath3010002

Authors: Miriam Di Ianni

Graph dynamics for a node-labeled graph is a set of updating rules describing how the labels of each node in the graph change in time as a function of the global set of labels. The underpopulation rule is a graph dynamics derived by simplifying the set of rules constituting the Game of Life. It is known that the number of label configurations met by a graph during the dynamic process defined by such a rule is bounded by a polynomial in the size of the graph if the graph is undirected. As a consequence, predicting the label evolution is an easy problem (i.e., a problem in P) in such a case. In this paper, the generalization of the underpopulation rule to signed and directed graphs is studied. It is proved here that the number of label configurations met by a graph during the dynamic process defined by any such generalized underpopulation rule is still bounded by a polynomial in the size of the graph if the graph is undirected and structurally balanced, while it is not bounded by any polynomial if the graph is directed, even when unsigned, unless P = PSpace.
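A minimal sketch of one plausible underpopulation-style update follows; the threshold variant below is an assumption for illustration, not necessarily the exact rule studied in the paper:

```python
def underpopulation_step(adj, labels, threshold=2):
    """One synchronous update (assumed rule variant): a node keeps label 1
    only if at least `threshold` of its neighbours are labelled 1."""
    return {v: 1 if labels[v] == 1 and sum(labels[u] for u in adj[v]) >= threshold
            else 0
            for v in adj}

def run_until_fixed(adj, labels, threshold=2):
    """Iterate the update until a fixed point, counting the steps taken."""
    steps = 0
    while True:
        steps += 1
        nxt = underpopulation_step(adj, labels, threshold)
        if nxt == labels:
            return labels, steps
        labels = nxt

# Path graph 0-1-2-3-4, all alive: endpoints have one neighbour and die first.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
final, steps = run_until_fixed(adj, {v: 1 for v in adj})
```

For this variant on an undirected graph the set of alive nodes shrinks monotonically, so the number of configurations met is at most linear in the number of nodes, in line with the polynomial bound for undirected graphs mentioned above.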

]]>AppliedMath doi: 10.3390/appliedmath3010001

Authors: Yaxi Li Yue Kai

The main idea of this paper is to study the chaotic behavior of the Zakharov&ndash;Kuznetsov equation with perturbation. By applying the traveling wave transformation, we transform the perturbed Zakharov&ndash;Kuznetsov equation with dual-power-law and triple-power-law nonlinearity into planar dynamic systems, and then analyze how the external perturbation terms affect the chaotic behavior. We emphasize that there is no chaotic phenomenon for the non-perturbed ZK equation; the chaos is caused solely by the external perturbation terms.

]]>AppliedMath doi: 10.3390/appliedmath2040044

Authors: Stephanie Were Somtochukwu Godfrey Nnabuife Boyu Kuang

The current handling of gas associated with oil production poses an environmental risk. This gas is being flared off due to the technical and economic attractiveness of this option. As flared gases are mainly composed of methane, they have harmful greenhouse effects when released into the atmosphere. This work discusses the effectiveness of using this gas for enhanced oil recovery (EOR) purposes as an alternative to flaring. In this study, a micromodel was designed with properties similar to a sandstone rock with a porosity of 0.4, and computational fluid dynamics (CFD) techniques were applied to design an EOR system. Temperature effects were not considered in the study, and the simulation was run at atmospheric pressure. Five case studies were carried out with different interfacial tensions between the oil and gas (0.005 N/m, 0.017 N/m, and 0.034 N/m) and different injection rates for the gas (1 &times; 10&minus;3 m/s, 1 &times; 10&minus;4 m/s, and 1 &times; 10&minus;6 m/s). The model was compared with a laboratory experiment measuring immiscible gas flooding. Factors affecting oil recovery, such as the interfacial tension between oil and gas, the viscosity, and the pressure, were studied in detail. The results showed that the surface tension at the oil&ndash;gas interface was a limiting factor for maximum oil recovery. The lowest surface tension case recovered 33% of the original oil in place. The capillary pressure was higher than the pressure in the micromodel, which lowered the amount of oil that was displaced. The study showed the importance of pressure maintenance to increase oil recovery for immiscible gas floods. It is recommended that a wider set of interfacial tensions between oil and gas be tested to obtain a range at which oil recovery is maximum for EOR with flared gas.

]]>AppliedMath doi: 10.3390/appliedmath2040043

Authors: Sina Aghakhani Mohammad Sadra Rajabi

In general, customers are looking to receive their orders in the fastest time possible and to make purchases at a reasonable price. Consequently, the importance of having an optimal delivery time is increasingly evident these days. One of the structures that can meet the demand for large supply chains with numerous orders is the hierarchical integrated hub structure. Such a structure improves efficiency and reduces chain costs. To make logistics more cost-effective, hub-and-spoke networks are necessary as a means to achieve economies of scale. Many hub network design models only consider hub type but do not take into account the hub scale measured using freight volume. This paper proposes a multi-objective scheduling model for hierarchical hub structures (HHS), which are layered from top to bottom. In the third layer, the central hub takes factory products from decentralized hubs and sends them to other decentralized hubs to which customers are connected. In the second layer, non-central hubs are responsible for receiving products from the factory and transferring them to central hubs. These hubs are also responsible for receiving products from central hubs and sending them to customers. Lastly, the first layer contains factories responsible for producing products and supplying their customers. The factory uses a flexible flow-shop platform and structure to produce its products. The model&rsquo;s objective is to minimize transportation and production costs as well as product arrival times. To validate and evaluate the model, small instances have been solved and analyzed in detail with the weighted-sum and &epsilon;-constraint methods. Consequently, based on the mean ideal distance (MID) metric, the two methods were compared for the designed instances.

]]>AppliedMath doi: 10.3390/appliedmath2040042

Authors: Dongyung Kim

Recently, a crowd crush accident occurred in Seoul. Mathematics and data science can contribute to understanding this incident and to avoiding future accidents. In this paper, I suggest an optimized monitoring methodology to avoid crowd crush accidents with scattered data by searching for the global minimum of the minimax data or minsum data. These scattered data are the position data of cell phones at time t. Mathematically, I find an exact solution of the optimized monitoring region with the suggested methodology by using minimal constraints. The methodology, along with its efficiency, is verified and validated.
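The minimax criterion can be illustrated by searching for the point that minimises the maximum distance to the scattered positions. The coarse grid search below is an illustrative stand-in for the exact solution derived in the paper, and the positions are hypothetical:

```python
import math

def minimax_center(points, grid=200):
    """Brute-force grid search for the point minimising the maximum
    distance to the scattered positions (the 1-center / minimax criterion)."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    lo_x, hi_x, lo_y, hi_y = min(xs), max(xs), min(ys), max(ys)
    best, best_val = None, float("inf")
    for i in range(grid + 1):
        for j in range(grid + 1):
            cx = lo_x + (hi_x - lo_x) * i / grid
            cy = lo_y + (hi_y - lo_y) * j / grid
            val = max(math.hypot(cx - px, cy - py) for px, py in points)
            if val < best_val:
                best, best_val = (cx, cy), val
    return best, best_val

# Four symmetric cell phone positions (hypothetical): the centre is the origin.
center, radius = minimax_center([(1, 0), (-1, 0), (0, 1), (0, -1)])
```

The grid search costs O(grid&sup2; &middot; n) distance evaluations and only approximates the optimum in general, whereas the paper derives an exact solution under minimal constraints.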

]]>AppliedMath doi: 10.3390/appliedmath2040041

Authors: Ayan Bhattacharya

This paper obtains a measure-theoretic restriction that must be satisfied by a prior probability measure for posteriors to be computed in limited time. Specifically, it is shown that the prior must be factorizable. Factorizability is a set of independence conditions for events in the sample space that allows agents to calculate posteriors using only a subset of the dataset. The result has important implications for models in mathematical economics and finance that rely on a common prior. If one introduces the limited time restriction to Aumann&rsquo;s famous Agreeing to Disagree setup, one sees that checking for factorizability requires agents to have access to every event in the measure space, thus severely limiting the scope of the agreement result.

]]>AppliedMath doi: 10.3390/appliedmath2040040

Authors: Robert Gardner Matthew Gladin

The well-known Enestr&ouml;m&ndash;Kakeya Theorem states that, for P(z) = &sum;_{&#8467;=0}^{n} a&#8467; z^&#8467;, a polynomial of degree n with real coefficients satisfying 0 &le; a0 &le; a1 &le; &#8943; &le; an, all the zeros of P lie in |z| &le; 1 in the complex plane. Motivated by recent results concerning an Enestr&ouml;m&ndash;Kakeya &ldquo;type&rdquo; condition on the real coefficients, we give similar results with hypotheses concerning the real and imaginary parts of the coefficients and the moduli of the coefficients. In this way, our results generalize the other recent results.
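The theorem&rsquo;s conclusion can be checked numerically with any polynomial root finder; the sketch below uses a self-contained Durand&ndash;Kerner iteration on a polynomial whose coefficients are arbitrary example values satisfying the non-decreasing non-negative hypothesis:

```python
def polynomial_zeros(coeffs):
    """Zeros of a0 + a1*z + ... + an*z^n via the Durand-Kerner iteration."""
    monic = [c / coeffs[-1] for c in coeffs]
    n = len(coeffs) - 1

    def p(z):
        return sum(c * z ** k for k, c in enumerate(monic))

    zs = [(0.4 + 0.9j) ** k for k in range(n)]  # standard distinct start points
    for _ in range(500):
        new = []
        for i, z in enumerate(zs):
            denom = 1 + 0j
            for j, w in enumerate(zs):
                if j != i:
                    denom *= z - w
            new.append(z - p(z) / denom)
        zs = new
    return zs

# Coefficients satisfy 0 <= a0 <= a1 <= ... <= an, so the theorem predicts
# that every zero lies in the closed unit disk.
zeros = polynomial_zeros([1, 2, 2, 5, 7])
largest = max(abs(z) for z in zeros)
```

For this example every computed zero has modulus at most one, as the theorem guarantees; the generalizations in the paper replace the monotonicity hypothesis with conditions on the real and imaginary parts or the moduli of the coefficients.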

]]>AppliedMath doi: 10.3390/appliedmath2040039

Authors: Paul C. Arpin Mihail Popa Daniel B. Turner

The motions of nuclei in a molecule can be mathematically described by using normal modes of vibration, which form a complete orthonormal basis. Each normal mode describes oscillatory motion at a frequency determined by the momentum of the nuclei. Near equilibrium, it is common to apply the quantum harmonic-oscillator model, whose eigenfunctions intimately involve combinatorics. Each electronic state has distinct force constants; therefore, each normal-mode basis is distinct. Duschinsky proposed a linearized approximation to the transformation between the normal-mode bases of two electronic states using a rotation matrix. The rotation angles are typically obtained by using quantum-chemical computations or via gas-phase spectroscopy measurements. Quantifying the rotation angles in the condensed phase remains a challenge. Here, we apply a two-dimensional harmonic model that includes a Duschinsky rotation to condensed-phase femtosecond coherence spectra (FCS), which are created in transient&ndash;absorption spectroscopy measurements through impulsive excitation of coherent vibrational wavepackets. Using the 2D model, we simulate spectra to identify the signatures of Duschinsky rotation. The results suggest that peak multiplicities and asymmetries may be used to quantify the rotation angle, which is a key advance in condensed-phase molecular spectroscopy.

]]>AppliedMath doi: 10.3390/appliedmath2040038

Authors: Débora N. Diniz Breno N. S. Keller Mariana T. Rezende Andrea G. C. Bianchi Claudia M. Carneiro Renata R. e R. Oliveira Eduardo J. S. Luz Daniela M. Ushizima Fátima N. S. de Medeiros Marcone J. F. Souza

Screening of Pap smear images continues to depend upon cytopathologists&rsquo; manual scrutiny, and the results are highly influenced by professional experience, leading to varying degrees of cell classification inaccuracies. In order to improve the quality of Pap smear results, several efforts have been made to create software to automate and standardize the processing of medical images. In this work, we developed the CEA (Cytopathologist Eye Assistant), an easy-to-use tool to aid cytopathologists in performing their daily activities. In addition, the tool was tested by a group of cytopathologists, whose feedback indicates that CEA could be a valuable tool to be integrated into Pap smear image analysis routines. For the construction of the tool, we evaluated different YOLO configurations and classification approaches. The best combination of algorithms uses YOLOv5s as the detection algorithm and an ensemble of EfficientNets as the classification algorithm. This configuration achieved 0.726 precision, 0.906 recall, and 0.805 F1-score when considering individual cells. We also analyzed classification of the image as a whole; in this case, the best configuration used YOLOv5s for both the detection and classification tasks, achieving 0.975 precision, 0.992 recall, 0.970 accuracy, and 0.983 F1-score.
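The reported F1-scores are consistent with the harmonic mean of the quoted precision and recall (a quick editorial check; the small discrepancy reflects rounding of the published figures):

```python
# F1 is the harmonic mean of precision and recall.
def f1_score(precision, recall):
    return 2 * precision * recall / (precision + recall)

print(round(f1_score(0.726, 0.906), 3))  # 0.806 vs. the reported 0.805
print(round(f1_score(0.975, 0.992), 3))  # 0.983, matching the report
```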

]]>AppliedMath doi: 10.3390/appliedmath2040037

Authors: Evangelos Ioannidis Dimitrios Tsoumaris Dimitrios Ntemkas Iordanis Sarikeisoglou

ESG ratings are data-driven indices, focused on three key pillars (Environmental, Social, and Governance), which are used by investors to evaluate companies and countries in terms of sustainability. A reasonable question is how these ratings are associated with each other. The research purpose of this work is to provide the first analysis of correlation networks constructed from the ESG ratings of selected economies. The networks are constructed based on Pearson correlation and analyzed with some well-known tools from Network Science, namely: degree centrality of the nodes, degree centralization of the network, network density, and network balance. We found that the Prevalence of Overweight and Life Expectancy are the most central ESG ratings, while, unexpectedly, two of the most commonly used economic indicators, GDP growth and Unemployment, are at the bottom of the list. China&rsquo;s ESG network has remarkably high positive and negative centralization, which has strong implications for the network&rsquo;s vulnerability and targeted controllability. Interestingly, if the sign of the correlations is omitted, this result cannot be captured. This is a clear example of why signed network analysis is needed. The most striking result of our analysis is that the ESG networks are extremely balanced, i.e., they are split into two anti-correlated groups of ESG ratings (nodes). It is impressive that the USA&rsquo;s network achieves 97.9% balance, i.e., an almost perfect structural split into two anti-correlated groups of nodes. This split of the network structure may have strong implications for hedging risk if we see ESG ratings as underlying assets for portfolio selection. Investing in anti-correlated assets, known as &ldquo;hedge assets&rdquo;, can be useful to offset potential losses.
Our future direction is to apply and extend the proposed signed network analysis to ESG ratings of corporate organizations, aiming to design optimal portfolios with desired balance between risk and return.
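The network construction described above can be sketched with synthetic data (editorial illustration only; the paper uses actual ESG ratings, and its balance and centralization measures are more involved than the signed degree counts shown here):

```python
# Build a signed Pearson-correlation network and count each node's
# positive and negative ties (a crude signed degree centrality).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))       # 30 observations of 5 toy indicators
X[:, 1] += X[:, 0]                 # force one strong positive correlation
C = np.corrcoef(X, rowvar=False)   # 5x5 Pearson correlation matrix

np.fill_diagonal(C, 0.0)           # ignore self-correlations
pos_degree = (C > 0).sum(axis=1)   # positive ties per node
neg_degree = (C < 0).sum(axis=1)   # negative ties per node
print(pos_degree, neg_degree)
```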

]]>AppliedMath doi: 10.3390/appliedmath2040036

Authors: Zhenkun Zhang Hongjian Lai

The cutwidth minimization problem consists of finding an arrangement of the vertices of a graph G on a line Pn with n=|V(G)| vertices in such a way that the maximum number of overlapping edges (i.e., the congestion) is minimized. A graph G with cutwidth k is k-cutwidth critical if every proper subgraph of G has cutwidth less than k and G is homeomorphically minimal. In this paper, we first verified some structural properties of k-cutwidth critical unicyclic graphs with k&gt;1. We then investigated the set T of critical unicyclic graphs with a cutwidth of four, which contains fifty elements, and obtained a forbidden subgraph characterization of 3-cutwidth unicyclic graphs.
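For concreteness, the cutwidth of a tiny graph can be computed by brute force over all linear arrangements (editorial sketch; the paper proceeds by structural arguments, not enumeration):

```python
# Cutwidth: minimize, over vertex orderings, the maximum number of
# edges crossing any gap between consecutive positions.
from itertools import permutations

def cutwidth(vertices, edges):
    best = float("inf")
    for order in permutations(vertices):
        pos = {v: i for i, v in enumerate(order)}
        congestion = max(
            sum(1 for u, v in edges
                if min(pos[u], pos[v]) <= i < max(pos[u], pos[v]))
            for i in range(len(order) - 1)
        )
        best = min(best, congestion)
    return best

print(cutwidth([0, 1, 2], [(0, 1), (1, 2)]))             # path: 1
print(cutwidth([0, 1, 2], [(0, 1), (1, 2), (0, 2)]))     # triangle: 2
```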

]]>AppliedMath doi: 10.3390/appliedmath2040035

Authors: Vasant Chavan

The aim of the present paper is to investigate the geometrical properties of the concircular curvature tensor on generalized Sasakian-space-forms. In this manner, we obtain results for &#981;-concircularly flat, &#981;-semisymmetric, locally concircularly symmetric, and locally concircularly &#981;-symmetric generalized Sasakian-space-forms. Finally, we construct examples of generalized Sasakian-space-forms to verify some of the results.

]]>AppliedMath doi: 10.3390/appliedmath2040034

Authors: Stanislav Yu. Lukashchuk

A nonlocally perturbed linear Schr&ouml;dinger equation with a small parameter was derived under the assumption of low-level fractionality by using one of the known general nonlocal wave equations with an infinite power-law memory. The problem of finding approximate symmetries for the equation is studied here. It has been shown that the perturbed Schr&ouml;dinger equation inherits all symmetries of the classical linear equation. It has also been proven that approximate symmetries corresponding to Galilean transformations and projective transformations of the unperturbed equation are nonlocal. In addition, a special class of nonlinear, nonlocally perturbed Schr&ouml;dinger equations that admits an approximate nonlocal extension of the Galilei group is derived. An example of constructing an approximately invariant solution for the linear equation using approximate scaling symmetry is presented.

]]>AppliedMath doi: 10.3390/appliedmath2040033

Authors: Feng Zhao Shao-Lun Huang

While non-linear activation functions play vital roles in artificial neural networks, it is generally unclear how the non-linearity can improve the quality of function approximations. In this paper, we present a theoretical framework to rigorously analyze the performance gain of using non-linear activation functions for a class of residual neural networks (ResNets). In particular, we show that when the input features of the ResNet are uniformly chosen and orthogonal to each other, using non-linear activation functions to generate the ResNet output outperforms, on average, using linear activation functions, and the performance gain can be explicitly computed. Moreover, we show that when the activation functions are chosen as polynomials with degree much less than the dimension of the input features, the optimal activation functions can be precisely expressed in the form of Hermite polynomials. This demonstrates the role of Hermite polynomials in the function approximations of ResNets.
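The Hermite polynomials in question are orthogonal under the Gaussian weight, which is the property behind their appearance here. A quick numerical check of that orthogonality (editorial background, independent of the paper's setting):

```python
# Probabilists' Hermite polynomials He_n satisfy
# integral He_m(x) He_n(x) exp(-x^2/2) dx = sqrt(2*pi) * n! * delta_mn.
import math
import numpy as np
from numpy.polynomial import hermite_e as He

x, w = He.hermegauss(20)   # Gauss quadrature for the weight exp(-x^2/2)

def inner(m, n):
    # [0]*m + [1] are the hermite_e coefficients of He_m.
    return float(np.sum(w * He.hermeval(x, [0] * m + [1])
                          * He.hermeval(x, [0] * n + [1])))

print(abs(inner(2, 3)) < 1e-9)                                 # True
print(math.isclose(inner(3, 3),
                   math.sqrt(2 * math.pi) * 6, rel_tol=1e-6))  # True
```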

]]>AppliedMath doi: 10.3390/appliedmath2040032

Authors: Stefanos Samaras Christine Böckmann Christoph Ritter

Extracting information about the shape or size of non-spherical aerosol particles from limited optical radar data is a well-known ill-posed inverse problem. The purpose of this study is to identify a robust and stable regularization method, including an appropriate parameter choice rule, for this problem. First, we briefly review common regularization methods and investigate a new iterative family of generalized Runge&ndash;Kutta filter regularizers. Next, we model a spheroidal particle ensemble and test different regularization methods on it, experimenting with artificial data pertaining to several atmospheric scenarios. We found that one method of the newly introduced generalized family, combined with the L-curve parameter choice rule, performs better than the traditional methods.
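As background on the kind of baseline being compared, here is a minimal Tikhonov regularization sketch on a toy ill-conditioned least-squares problem (editorial illustration; the aerosol inversion and the generalized Runge&ndash;Kutta filter family are far more elaborate, and the lam value below is arbitrary rather than chosen by the L-curve):

```python
# Tikhonov regularization: solve (A^T A + lam * I) x = A^T b to
# stabilize an ill-conditioned least-squares problem.
import numpy as np

rng = np.random.default_rng(1)
A = np.vander(np.linspace(0.0, 1.0, 20), 8)  # ill-conditioned 20x8 matrix
x_true = rng.normal(size=8)
b = A @ x_true + 1e-3 * rng.normal(size=20)  # noisy observations

lam = 1e-4   # arbitrary here; a parameter choice rule would pick this
x_reg = np.linalg.solve(A.T @ A + lam * np.eye(8), A.T @ b)
print(np.linalg.norm(A @ x_reg - b))         # residual near the noise level
```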

]]>AppliedMath doi: 10.3390/appliedmath2040031

Authors: Richard A. Chechile

The retention of human memory is a process that can be understood from a hazard-function perspective. Hazard is the conditional probability of a state change at time t, given that the state change has not yet occurred. After reviewing the underlying mathematical results of hazard functions in general, there is an analysis of the hazard properties associated with nine theories of memory that emerged from psychological science. Five theories predict a strictly monotonically decreasing hazard, whereas the other four theories predict a peak-shaped hazard function that rises initially to a peak and then decreases for longer time periods. Thus, the behavior of hazard shortly after the initial encoding is the critical difference among the theories. Several theorems provide a basis to explore hazard for the initial time period after encoding in terms of a more practical surrogate function that is linked to the behavior of the hazard function. Evidence for a peak-shaped hazard function is provided, and a case is made for one particular psychological theory of memory, which posits that memory encoding produces two redundant representations with different hazard properties: one representation has increasing hazard, while the other has decreasing hazard.
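The hazard shapes that separate the two classes of theories can be illustrated with the Weibull family, whose hazard has the closed form h(t) = (k/&lambda;)(t/&lambda;)^(k-1) (editorial illustration; the Weibull family is not one of the paper's nine theories):

```python
# Weibull hazard: monotone decreasing for shape k < 1,
# monotone increasing for k > 1 (constant, i.e., exponential, at k = 1).
def weibull_hazard(t, k, lam=1.0):
    return (k / lam) * (t / lam) ** (k - 1)

ts = [0.5, 1.0, 2.0]
decreasing = [weibull_hazard(t, k=0.5) for t in ts]
increasing = [weibull_hazard(t, k=2.0) for t in ts]
print(decreasing[0] > decreasing[1] > decreasing[2])  # True
print(increasing[0] < increasing[1] < increasing[2])  # True
```

A peak-shaped hazard, as several of the reviewed theories predict, requires a different family (e.g., the lognormal).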

]]>AppliedMath doi: 10.3390/appliedmath2040030

Authors: Robert Vrabel

In this short paper, we study the problem of traversing a crossbar through a bent channel, which has been formulated as a nonlinear convex optimization problem. The result is a MATLAB code that computes the maximum length of the crossbar as a function of the widths of the channel&rsquo;s two parts and the angle between them. In the case where the two parts are perpendicular to each other, the result is expressed analytically and is closely related to the astroid curve (a hypocycloid with four cusps).
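For the perpendicular case, the analytical result is the classical &ldquo;ladder around a corner&rdquo; formula, L = (a^(2/3) + b^(2/3))^(3/2) for channel widths a and b; a short cross-check (editorial sketch, in Python rather than the paper's MATLAB):

```python
# Maximum crossbar length through a right-angle bend: the analytic
# formula versus direct numerical minimization of the constraining
# length a/sin(t) + b/cos(t) over the angle t in (0, pi/2).
import math

def max_crossbar(a, b):
    return (a ** (2 / 3) + b ** (2 / 3)) ** 1.5

def max_crossbar_numeric(a, b, n=200_000):
    return min(a / math.sin(t) + b / math.cos(t)
               for t in (math.pi / 2 * (i + 0.5) / n for i in range(n)))

a, b = 2.0, 1.0
print(math.isclose(max_crossbar(a, b),
                   max_crossbar_numeric(a, b), rel_tol=1e-6))  # True
```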

]]>AppliedMath doi: 10.3390/appliedmath2030029

Authors: Mikhail Sgibnev

We consider the inhomogeneous Wiener&ndash;Hopf equation whose kernel is a nonarithmetic probability distribution with positive mean. The inhomogeneous term behaves like a submultiplicative function. We establish asymptotic properties of the solution to which the successive approximations converge. These properties depend on the asymptotics of the submultiplicative function.

]]>AppliedMath doi: 10.3390/appliedmath2030028

Authors: Ted Gyle Lewis

A mathematical description of catastrophe in complex systems modeled as a network is presented, with emphasis on network topology and its relationship to risk and resilience. We present mathematical formulas for computing the risk, resilience, and likelihood of faults in the nodes/links of network models of complex systems and illustrate the application of the formulas to the simulation of catastrophic failure. This model is not related to the nonlinear &ldquo;catastrophe theory&rdquo; of Ren&eacute; Thom, E.C. Zeeman, and others. Instead, we present a strictly probabilistic network model for estimating risk and resilience&mdash;two useful metrics used in practice. We propose a mathematical model of exceedance probability, risk, and resilience and show that these properties depend wholly on vulnerability, consequence, and the properties of the network representation of the complex system. We use simulation of the network under simulated stress, causing one or more nodes/links to fail, to extract properties of risk and resilience. In this paper, two types of stress are considered: viral cascades and flow cascades. One unified definition of risk, MPL, is proposed, and three kinds of resilience are illustrated&mdash;viral cascading, blocking node/link, and flow resilience. The principal contributions of this work are new equations for risk and resilience and measures of resilience based on the vulnerability of individual nodes/links and on network topology expressed in terms of spectral radius, bushy, and branchy metrics. We apply the model to a variety of networks&mdash;hypothetical and real&mdash;and show that network topology needs to be included in any definition of network risk and resilience. In addition, we show how simulations can identify likely future faults due to viral and flow cascades. Simulations of this nature are useful to the practitioner.
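Among the topological quantities such resilience measures depend on, the spectral radius is the largest eigenvalue magnitude of the network's adjacency matrix; a minimal computation (editorial sketch on a toy 4-cycle, not one of the paper's networks):

```python
# Spectral radius of a small network's adjacency matrix.
import numpy as np

A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)  # adjacency matrix of a 4-cycle

spectral_radius = max(abs(np.linalg.eigvals(A)))
print(round(float(spectral_radius), 6))    # 2.0 for the 4-cycle
```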

]]>AppliedMath doi: 10.3390/appliedmath2030027

Authors: M. Rodrigo D. Zulkarnaen

A general population model with a variable carrying capacity, consisting of a coupled system of nonlinear ordinary differential equations, is proposed, and a procedure for obtaining analytical solutions for three broad classes of models is provided. A particular case is when the population and carrying capacity per capita growth rates are proportional. As an example, a generalised Thornley&ndash;France model is given. Further examples are given when the growth rates are not proportional. A criterion for when inflexion may occur is also provided, and results of numerical simulations are presented.
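The proportional-growth-rate case can be sketched with a simple Euler integration (editorial illustration; the parameter values and the specific logistic coupling below are assumptions, not taken from the paper):

```python
# Logistic population N with a carrying capacity K whose per capita
# growth rate is proportional (factor gamma) to the population's.
def simulate(N0=1.0, K0=10.0, r=0.5, gamma=0.2, dt=0.01, steps=4000):
    N, K = N0, K0
    for _ in range(steps):
        dN = r * N * (1 - N / K)   # logistic growth toward K
        dK = gamma * K * (dN / N)  # per capita rates kept proportional
        N += dt * dN
        K += dt * dK
    return N, K

N, K = simulate()
print(N < K)   # True: the population tracks, but stays below, K
```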

]]>AppliedMath doi: 10.3390/appliedmath2030026

Authors: Edoardo Ballico

Let X&sub;P^r be an integral and non-degenerate variety. A &ldquo;cost-function&rdquo; (for the Zariski topology, the semialgebraic one, or the Euclidean one) is a semicontinuous function w: X&rarr;[1,+&infin;)&cup;{+&infin;} such that w(a)=1 for all a in a non-empty open subset of X. For any q&isin;P^r, the rank r_{X,w}(q) of q with respect to (X,w) is the minimum of all &sum;_{a&isin;S} w(a), where S is a finite subset of X spanning q. We have r_{X,w}(q)&lt;+&infin; for all q. We discuss this definition and classify the extremal cases of pairs (X,q). We give upper bounds for all r_{X,w}(q) (twice the generic rank) not depending on w. This notion generalizes the case in which the cost-function w is the constant function 1. In that case, the rank is a well-studied notion that covers the tensor rank of tensors of arbitrary formats (PARAFAC or CP decomposition) and the additive decomposition of forms. We also adapt to cost-functions the rank 1 decomposition of real tensors, in which we allow pairs of complex conjugate rank 1 tensors.

]]>AppliedMath doi: 10.3390/appliedmath2030025

Authors: Frédéric Ouimet

In this paper, we develop local expansions for the ratio of the centered matrix-variate T density to the centered matrix-variate normal density with the same covariances. The approximations are used to derive upper bounds on several probability metrics (such as the total variation and Hellinger distance) between the corresponding induced measures. This work extends some previous results for the univariate Student distribution to the matrix-variate setting.

]]>