AppliedMath doi: 10.3390/appliedmath3020025

Authors: Abdolreza Aghajanpour Seyedalireza Khatibi

This research employs computational methods to analyze the velocity and mixture fraction distributions of a non-reacting propane jet discharged into parallel co-flowing air under isothermal conditions. The study compares the numerical results with experimental results obtained from the Sandia Laboratory (USA). The objective is to improve the understanding of flow structure and mixing mechanisms in situations involving no chemical reactions or heat transfer. The two-equation Realizable k-ε eddy viscosity turbulence model was used to simulate turbulent flow on a nearly 2D plane (specifically, a 5-degree partition of the experimental cylindrical domain). This was achieved using the OpenFOAM open-source software and the swak4Foam utility, with the reactingFoam solver carefully adapted. This turbulence model was selected for its superior predictive capability for the spreading rate of both planar and round jets compared with other k-ε variants. Numerical axial and radial profiles of several parameters were obtained for a grid-independent mesh (mesh B) and compared with experimental data to assess the accuracy of the numerical model. These parameters are the mean velocities, turbulence kinetic energy, mean mixture fraction, mixture fraction half radius (Lf), and the mass flux diagram. The assumption that w′ = v′ for determining the turbulence kinetic energy, k, appears to hold in situations where experimental data lack w′. The simulations successfully captured the mean mixture fraction and its half radius, Lf, a measure of the jet's width, determined from radial profiles taken at x/D = 0, 4, 15, 30, and 50 along the X-axis.
The mean velocity fields in the X-direction (Umean) are captured reasonably well, while the mean velocity fields in the Y-direction (Vmean) are resolved less accurately. The accuracy of the turbulence kinetic energy (k) lies between that of Umean and Vmean. The absence of experimental data for the absolute pressure (p) is compensated by providing numerical pressure contours.

AppliedMath doi: 10.3390/appliedmath3020024

Authors: Abdulaziz D. Alhaidari Abdallah Laradji

An algebraic system is introduced which is very useful for performing scattering calculations in quantum field theory. It is the set of all real numbers greater than or equal to −m², with a parity designation and a special rule for addition and subtraction, where m is the rest mass of the scattered particle.

AppliedMath doi: 10.3390/appliedmath3020023

Authors: Richard D. Gill

We show how both smaller and more reliable p-values can be computed in Bell-type experiments by using statistical deviations from no-signalling equalities to reduce statistical noise in the estimation of Bell's S or Eberhard's J. Further improvement was obtained by using the Wilks likelihood ratio test based on the four tetranomially distributed vectors of counts of the four different outcome combinations, one 4-vector for each of the four setting combinations. The methodology was illustrated by application to the loophole-free Bell experiments of 2015 and 2016 performed in Delft and Munich, at NIST, and in Vienna, respectively, and also to the earlier (1998) Innsbruck experiment of Weihs et al. and the recent (2022) Munich experiment of Zhang et al., which investigates the use of a loophole-free Bell experiment as part of a protocol for device-independent quantum key distribution (DIQKD).

AppliedMath doi: 10.3390/appliedmath3020022

Authors: Md Easin Hasan Fahad Mostafa Md S. Hossain Jonathon Loftin

Hepatocellular carcinoma (HCC) is the most frequently occurring primary liver cancer and the most common type of liver cancer. The risk of developing HCC is highest in those with chronic liver diseases, such as cirrhosis brought on by hepatitis B or C infection. Knowledge-based interpretations are essential for understanding the HCC microarray dataset due to its nature, which includes high dimensions and hidden biological information in genes. When analyzing gene expression data with many genes and few samples, the main problem is to separate disease-related information from a vast quantity of redundant gene expression data and their noise. Clinicians are interested in identifying the specific genes responsible for HCC in individual patients. These responsible genes may differ between patients, leading to variability in gene selection. Moreover, ML approaches, such as classification algorithms, behave like black boxes, and it is important to interpret the ML model outcomes. In this paper, we use a reliable pipeline to determine important genes for discovering HCC from microarray analysis. We eliminate redundant and unnecessary genes through gene selection using principal component analysis (PCA). Moreover, we detect responsible genes with the random forest algorithm through variable importance ranking calculated from the Gini index. Classification algorithms, such as random forest (RF), naïve Bayes classifier (NBC), logistic regression, and k-nearest neighbor (kNN), are used to classify HCC from responsible genes. However, classification algorithms produce outcomes based on selected genes for a large group of patients rather than for specific patients. Thus, we apply the local interpretable model-agnostic explanations (LIME) method to uncover the AI-generated forecasts as well as recommendations for patient-specific responsible genes.
Moreover, we show our pathway analysis and a dendrogram of the pathway through hierarchical clustering of the responsible genes. Sixteen responsible genes were found using the Gini index, with CCT3 and KPNA2 showing the highest mean decrease in Gini values. Among the four classification algorithms, random forest showed 96.53% accuracy with a precision of 97.30%. Five-fold cross-validation was used in order to collect multiple estimates and assess the variability for the RF model, with a mean ROC of 0.95 ± 0.2. LIME outcomes were interpreted for two random patients with positive and negative effects. Therefore, we identified 16 responsible genes that can be used to improve HCC diagnosis or treatment. The proposed framework using machine-learning classification algorithms with the LIME method can be applied to find responsible genes to diagnose and treat HCC patients.

AppliedMath doi: 10.3390/appliedmath3020021

Authors: Fabio Silva Botelho

In the first part of this article, we present a new proof of Korn's inequality in an n-dimensional context. The results are based on standard tools of real and functional analysis. For the final result, the standard Poincaré inequality plays a fundamental role. In the second part, we develop a global existence result for a non-linear model of plates. We address a rather general type of boundary conditions, and the novelty here is the more relaxed restrictions concerning the external load magnitude.

AppliedMath doi: 10.3390/appliedmath3020020

Authors: Sanjar M. Abrarov Rehan Siddiqui Rajinder Kumar Jagpal Brendan M. Quine

In this work, we derive a generalized series expansion of the arctangent function by using enhanced midpoint integration (EMI). The algorithmic implementation of the generalized series expansion utilizes a two-step iteration without surd or complex numbers. Our computational test reveals that this generalization improves the accuracy of arctangent computation by many orders of magnitude with increasing integer M, associated with the subintervals in the EMI formula. The generalized series expansion may be promising for practical applications. It may be particularly useful in tasks requiring extensive computations with arbitrary-precision floating-point numbers. The algorithmic implementation of the generalized series expansion of the arctangent function shows a rapid convergence rate in the computation of digits of π in Machin-like formulas.
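The paper's EMI-based expansion is not reproduced here, but the role of arctangent series in Machin-like formulas can be illustrated with the classical Machin formula and a plain Taylor expansion (a minimal sketch; the generalized series described above converges far faster):

```python
from fractions import Fraction

def arctan_taylor(x, terms):
    # Plain Maclaurin series arctan(x) = x - x^3/3 + x^5/5 - ..., evaluated
    # in exact rational arithmetic so no floating-point error accumulates.
    total = Fraction(0)
    power = Fraction(x)
    for k in range(terms):
        total += (-1) ** k * power / (2 * k + 1)
        power *= x * x
    return total

# Machin's 1706 formula: pi/4 = 4*arctan(1/5) - arctan(1/239).
pi_approx = 4 * (4 * arctan_taylor(Fraction(1, 5), 12)
                 - arctan_taylor(Fraction(1, 239), 4))
print(float(pi_approx))  # 3.141592653589793
```

Because the series arguments 1/5 and 1/239 are small, a handful of terms already reaches double precision; the accuracy gains claimed for the generalized expansion operate on top of this basic mechanism.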

AppliedMath doi: 10.3390/appliedmath3020019

Authors: Roy M. Howard

Based on the geometry of a radial function, a sequence of approximations for arcsine, arccosine and arctangent is detailed. The approximations for arcsine and arccosine are sharp at the points zero and one. Convergence of the approximations is proved, and the convergence is significantly better than that of Taylor series approximations for arguments approaching one. The established approximations can serve as the basis for Newton–Raphson iteration, and analytical approximations of modest complexity, with relative error bounds of the order of 10⁻¹⁶ and lower, can be defined. Applications of the approximations include: first, upper and lower bounded functions, of arbitrary accuracy, for arcsine, arccosine and arctangent. Second, approximations with significantly higher accuracy based on the upper or lower bounded approximations. Third, approximations for the square of arcsine with better convergence than well-established series for this function. Fourth, approximations to arccosine and arcsine, to even order powers, with relative errors that are significantly lower than published approximations. Fifth, approximations for the inverse tangent integral function and several unknown integrals.
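The slow Taylor convergence near one that motivates these approximations is easy to demonstrate (a sketch using the standard Maclaurin series for arcsine, not the radial-function approximations of the paper):

```python
import math

def arcsin_taylor(x, terms):
    # Maclaurin series arcsin(x) = sum_k C(2k,k)/(4^k (2k+1)) x^(2k+1);
    # successive coefficients C(2k,k)/4^k satisfy the ratio (2k+1)/(2k+2).
    total = 0.0
    coeff = 1.0
    for k in range(terms):
        total += coeff * x ** (2 * k + 1) / (2 * k + 1)
        coeff *= (2 * k + 1) / (2 * k + 2)
    return total

# Fifty terms suffice at x = 0.5 but leave a visible error at x = 0.99.
for x in (0.5, 0.99):
    err = abs(arcsin_taylor(x, 50) - math.asin(x))
    print(f"x = {x}: 50-term error = {err:.1e}")
```

The error at x = 0.99 remains orders of magnitude larger than at x = 0.5 with the same number of terms, which is exactly the regime where the approximations above outperform the Taylor series.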

AppliedMath doi: 10.3390/appliedmath3020018

Authors: Aurora Poggi Luca Di Persio Matthias Ehrhardt

Our research involves analyzing the latest models used for electricity price forecasting, which include both traditional inferential statistical methods and newer deep learning techniques. Through our analysis of historical data and the use of multiple weekday dummies, we have proposed an innovative solution for forecasting electricity spot prices. This solution involves breaking down the spot price series into two components: a seasonal trend component and a stochastic component. By utilizing this approach, we are able to provide highly accurate predictions for all considered time frames.

AppliedMath doi: 10.3390/appliedmath3020017

Authors: Mitsuhiro Miyazaki

Let K be a field. In this paper, we construct a sequence of Cohen–Macaulay standard graded K-domains whose h-vectors are non-flawless and have exponentially deep flaws.

AppliedMath doi: 10.3390/appliedmath3020016

Authors: Hongli Zhou Xiao Tang Rongle Zhao

In interval-valued three-way decision, the reflection of decision-makers' preference under the full consideration of interval-valued characteristics is particularly important. In this paper, we propose an interval-valued three-way decision model based on the cumulative prospect theory. First, by means of the interval distance measurement method, the loss function and the gain function are constructed to reflect the differences of interval radius and expectation simultaneously. Second, combined with the reference point, the prospect value function is utilized to reflect decision-makers' different risk preferences for gains and losses. Third, the calculation method of cumulative prospect value for taking action is given through the transformation of the prospect value function and cumulative weight function. Then, the new decision rules are deduced based on the principle of maximizing the cumulative prospect value. Finally, in order to verify the effectiveness and feasibility of the algorithm, the prospect value for decision-making and threshold changes are analyzed under different risk attitudes and different radii of the interval-valued decision model. In addition, compared with the interval-valued decision rough set model, our method in this paper has better decision prospects.
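The prospect value and weighting functions that cumulative prospect theory builds on can be sketched as follows (a minimal illustration using the standard Tversky–Kahneman functional forms and their 1992 parameter estimates, not the interval-valued construction of this paper):

```python
def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    # Tversky-Kahneman value function: concave for gains, convex and
    # loss-averse (scaled by lam > 1) for losses, relative to a reference point.
    return x ** alpha if x >= 0 else -lam * (-x) ** beta

def prob_weight(p, gamma=0.61):
    # Inverse-S probability weighting: overweights small probabilities,
    # underweights moderate-to-large ones.
    return p ** gamma / (p ** gamma + (1.0 - p) ** gamma) ** (1.0 / gamma)

print(prospect_value(10.0))   # a gain of 10 is valued at about 7.6
print(prospect_value(-10.0))  # an equal loss weighs about -17.1
print(prob_weight(0.5))       # a 50% chance is weighted near 0.42
```

The asymmetry between gains and losses visible here is the kind of risk preference the model above transfers to interval-valued loss and gain functions.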

AppliedMath doi: 10.3390/appliedmath3020015

Authors: Ivie Stein Md Nurul Raihen

In this work, we studied convergence rates using quotient convergence factors and root convergence factors, as described by Ortega and Rheinboldt, for Hestenes' Gram–Schmidt conjugate direction method without derivatives. We performed computations in order to make a comparison between this conjugate direction method, for minimizing a nonquadratic function f, and Newton's method, for solving ∇f = 0. Our primary purpose was to implement Hestenes' CGS method with no derivatives and determine convergence rates.

AppliedMath doi: 10.3390/appliedmath3010014

Authors: Nasrin Shabani Amin Beheshti Helia Farhood Matt Bower Michael Garrett Hamid Alinejad-Rokny

Numerous studies have established a correlation between creativity and intrinsic motivation to learn, with creativity defined as the process of generating original and valuable ideas, often by integrating perspectives from different fields. The field of educational technology has shown a growing interest in leveraging technology to promote creativity in the classroom, with several studies demonstrating the positive impact of creativity on learning outcomes. However, mining creative thinking patterns from educational data remains a challenging task, even with the proliferation of research on adaptive technology for education. This paper presents an initial effort towards formalizing educational knowledge by developing a domain-specific Knowledge Base that identifies key concepts, facts, and assumptions essential for identifying creativity patterns. Our proposed pipeline involves modeling raw educational data, such as assessments and class activities, as a graph to facilitate the contextualization of knowledge. We then leverage a rule-based approach to enable the mining of creative thinking patterns from the contextualized data and knowledge graph. To validate our approach, we evaluate it on real-world datasets and demonstrate how the proposed pipeline can enable instructors to gain insights into students' creative thinking patterns from their activities and assessment tasks.

AppliedMath doi: 10.3390/appliedmath3010013

Authors: Radhakumari Maya Muhammed Rasheed Irshad Muhammed Ahammed Christophe Chesneau

In this research work, a new three-parameter lifetime distribution is introduced and studied. It is called the Harris extended Bilal distribution due to its construction from a mixture of the famous Bilal and Harris distributions, resulting from a branching process. The basic properties, such as the moment generating function, moments, quantile function, and Rényi entropy, are discussed. We show that the hazard rate function has ideal features for modeling increasing, upside-down bathtub, and roller-coaster data sets. In a second part, the Harris extended Bilal model is investigated from a statistical viewpoint. The maximum likelihood estimation is used to estimate the parameters, and a simulation study is carried out. The flexibility of the proposed model in a hydrological data analysis scenario is demonstrated using two practical data sets and compared with important competing models. After that, we establish an acceptance sampling plan that takes advantage of all of the features of the Harris extended Bilal model. The operating characteristic values, the minimum sample size that corresponds to the maximum possible defects, and the minimum ratios of lifetime associated with the producer's risk are discussed.

AppliedMath doi: 10.3390/appliedmath3010012

Authors: Jasmine Renee Evans Asamoah Nkwanta

The leftmost column entries of RNA arrays I and II count the RNA numbers that are related to RNA secondary structures from molecular biology. RNA secondary structures sometimes have mutations and wobble pairs. Mutations are random changes that occur in a structure, and wobble pairs are known as non-Watson–Crick base pairs. We used topics from RNA combinatorics and Riordan array theory to establish connections among combinatorial objects related to linear trees, lattice walks, and RNA arrays. In this paper, we establish interesting new explicit bijections (one-to-one correspondences) involving certain subclasses of linear trees, lattice walks, and RNA secondary structures. We provide an interesting generalized lattice walk interpretation of RNA array I. In addition, we provide a combinatorial interpretation of RNA array II as RNA secondary structures with n bases and k base-point mutations where ω of the structures contain wobble base pairs. We also establish an explicit bijection between RNA structures with mutations and wobble bases and a certain subclass of lattice walks.

AppliedMath doi: 10.3390/appliedmath3010011

Authors: Wullianallur Raghupathi Viju Raghupathi Aditya Saharia

This research studies the occurrence of data breaches involving patient data in healthcare provider settings. Using visual analytics and data visualization tools, we study the distribution of healthcare breaches by state. We review the main causes and types of breaches, as well as their impact on both providers and patients. The research reveals a wide range of data breach victims. Network servers are the most common location of breaches, such as hacking and information technology (IT) incidents, unauthorized access, theft, loss, and improper disposal. We offer proactive recommendations to prepare for a breach. These include, but are not limited to, regulatory compliance, implementing policies and procedures, and monitoring network servers. Unfortunately, the results indicate that the probability of data breaches will continue to rise.

AppliedMath doi: 10.3390/appliedmath3010010

Authors: Christophe Chesneau

Copula analysis was created to explain the dependence of two or more quantitative variables. Due to the need for in-depth data analysis involving complex variable relationships, there is always a need for new copula models with original features. As a modern example, for the analysis of circular or periodic data types, trigonometric copulas are particularly attractive and recommended. This is, however, an underexploited topic. In this article, we propose a new collection of eight trigonometric and hyperbolic copulas, four based on the sine function and the others on the tangent function, all derived from the construction of the famous Farlie–Gumbel–Morgenstern copula. In addition to their original trigonometric and hyperbolic functionalities, the proposed copulas have the feature of depending on three parameters with complementary roles: one is a dependence parameter; one is a shape parameter; and the last can be viewed as an angle parameter. In our main findings, for each of the eight copulas, we determine a wide range of admissible values for these parameters. Subsequently, the capabilities, features, and functions of the new copulas are thoroughly examined. The shapes of the main functions of some copulas are illustrated graphically. Theoretically, symmetry in general, stochastic dominance, quadrant dependence, tail dependence, Archimedean nature, correlation measures, and inference on the parameters are investigated. Some copula shapes are illustrated with the help of figures. On the other hand, some two-dimensional inequalities are established and may be of separate interest.
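For reference, the base construction that the new copulas extend can be sketched numerically (a minimal illustration of the classical one-parameter Farlie–Gumbel–Morgenstern copula; the three-parameter trigonometric and hyperbolic variants are not reproduced here):

```python
def fgm_copula(u, v, theta):
    # Classical one-parameter FGM copula, admissible for -1 <= theta <= 1;
    # Spearman's rho equals theta/3, so its dependence is necessarily weak.
    return u * v * (1.0 + theta * (1.0 - u) * (1.0 - v))

# Every copula has uniform margins: C(u, 1) = u and C(1, v) = v.
print(fgm_copula(0.3, 1.0, 0.5))   # 0.3
print(fgm_copula(1.0, 0.7, -0.8))  # 0.7
print(fgm_copula(0.5, 0.5, 1.0))   # 0.3125, above independence (0.25)
```

Replacing the perturbation term θ(1 − u)(1 − v) with trigonometric or hyperbolic functions of u and v is the general strategy behind FGM-type extensions such as those proposed here.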

AppliedMath doi: 10.3390/appliedmath3010009

Authors: Akihiro Nishiyama Shigenori Tanaka Jack A. Tuszynski

We show renormalization in Quantum Brain Dynamics (QBD) in 3+1 dimensions, namely Quantum Electrodynamics with water rotational dipole fields. First, we introduce the Lagrangian density for QBD involving terms of water rotational dipole fields, photon fields and their interactions. Next, we show Feynman diagrams with 1-loop self-energy and vertex function in dipole coupling expansion in QBD. The counter-terms are derived from the coupling expansion of the water dipole moment. Our approach will be applied to numerical simulations of Kadanoff–Baym equations for water dipoles and photons to describe the breakdown of the rotational symmetry of dipoles, namely memory formation processes. It will also be extended to the renormalization group method for QBD with running parameters in multi-scales.

AppliedMath doi: 10.3390/appliedmath3010008

Authors: Changlun Ye Xianbing Luo

A multilevel Monte Carlo (MLMC) method is applied to simulate a stochastic optimal control problem based on the gradient projection method. The numerical simulation of the stochastic optimal control problem involves the approximation of an expected value, and the MLMC method is used to address it. The computational cost of the MLMC method and the convergence analysis of the MLMC gradient projection algorithm are presented. Two numerical examples are carried out to verify the effectiveness of our method.
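The core MLMC idea of estimating an expectation through a telescoping sum of coupled level corrections can be sketched on a toy problem (a hypothetical Euler discretization of geometric Brownian motion, not the stochastic optimal control problem treated in the paper):

```python
import math
import random

def gbm_euler(increments, dt, x0=1.0, mu=0.05, sigma=0.2):
    # Euler-Maruyama discretization of dX = mu*X dt + sigma*X dW.
    x = x0
    for dw in increments:
        x += mu * x * dt + sigma * x * dw
    return x

def coupled_diff(level):
    # One coupled sample of P_l - P_{l-1} for the payoff P = X(1): the
    # coarse path reuses the SAME Brownian increments (pairwise summed),
    # which is what makes the variance of the corrections decay with level.
    n = 2 ** level
    dt = 1.0 / n
    fine = [random.gauss(0.0, math.sqrt(dt)) for _ in range(n)]
    p_fine = gbm_euler(fine, dt)
    if level == 0:
        return p_fine  # the base-level term of the telescoping sum
    coarse = [fine[2 * i] + fine[2 * i + 1] for i in range(n // 2)]
    return p_fine - gbm_euler(coarse, 2 * dt)

def mlmc(levels, samples):
    # Telescoping estimator E[P_L] = E[P_0] + sum_{l=1}^{L} E[P_l - P_{l-1}].
    return sum(
        sum(coupled_diff(l) for _ in range(samples)) / samples
        for l in range(levels + 1)
    )

random.seed(1)
estimate = mlmc(levels=4, samples=4000)
print(estimate)  # close to E[X(1)] = exp(0.05), about 1.05
```

Because most samples are drawn on the cheap coarse levels, the cost of reaching a target accuracy is much lower than for single-level Monte Carlo at the finest discretization.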

AppliedMath doi: 10.3390/appliedmath3010007

Authors: Quan Yuan Zhixin Yang

The eigenvalue bounds of interval matrices are often required in some mechanical and engineering fields. In this paper, we improve the theoretical results presented in a previous paper, "A property of eigenvalue bounds for a class of symmetric tridiagonal interval matrices", and provide a fast algorithm to find the upper and lower bounds of the interval eigenvalues of a class of symmetric tridiagonal interval matrices.

AppliedMath doi: 10.3390/appliedmath3010006

Authors: AppliedMath Editorial Office

High-quality academic publishing is built on rigorous peer review [...]

AppliedMath doi: 10.3390/appliedmath3010005

Authors: Garri Davydyan

Different hypotheses of carcinogenesis have been proposed based on local genetic factors and physiologic mechanisms. It is assumed that changes in the metric invariants of a biologic system (BS) determine the general mechanisms of cancer development. Numerous pieces of data demonstrate the existence of three invariant feedback patterns of BS: negative feedback (NFB), positive feedback (PFB) and reciprocal links (RL). These base patterns represent basis elements of a Lie algebra sl(2,R) and an imaginary part of coquaternion. Considering coquaternion as a model of a functional core of a BS, in this work a new geometric approach has been introduced. Based on this approach, conditions of the system are identified with the points of three families of hypersurfaces in R_2^4: hyperboloids of one sheet, hyperboloids of two sheets and double cones. The obtained results also demonstrated the correspondence of an indefinite metric of coquaternion quadratic form with negative and positive entropy contributions of the base elements to the energy level of the system. From that, it can be further concluded that the anabolic states of the system will correspond to the points of a hyperboloid of one sheet, whereas catabolic conditions correspond to the points of a hyperboloid of two sheets. Equilibrium states will lie in a double cone. Physiologically anabolic and catabolic states dominate intermittently, oscillating around the equilibrium. Deterioration of base elements increases positive entropy and causes domination of catabolic states, which is the main metabolic determinant of cancer. Based on these observations and the geometric representation of a BS's behavior, it was shown that conditions related to cancer metabolic malfunction will have a tendency to remain inside the double cone.

AppliedMath doi: 10.3390/appliedmath3010004

Authors: Alexander Robitzsch

Linking errors in item response models quantify the dependence of means, standard deviations, or other distribution parameters on the chosen items. The jackknife approach is frequently employed in the computation of the linking error. However, this jackknife linking error can be computationally tedious if many items are involved. In this article, we provide an analytical approximation of the jackknife linking error. The newly proposed approach turns out to be computationally much less demanding. Moreover, the new linking error approach performed satisfactorily for datasets with at least 20 items.
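The jackknife principle that the analytical approximation targets can be sketched for a simple statistic (a generic leave-one-out computation over hypothetical item-level values, with the mean standing in for the linking statistic; this is not the item response linking error itself):

```python
def jackknife_se(values):
    # Leave-one-out jackknife standard error: recompute the statistic
    # (here, the mean) with each item deleted, then aggregate the replicates.
    n = len(values)
    total = sum(values)
    loo = [(total - v) / (n - 1) for v in values]   # leave-one-out means
    grand = sum(loo) / n
    var = (n - 1) / n * sum((m - grand) ** 2 for m in loo)
    return var ** 0.5

# For the mean, the jackknife reproduces the classical standard error.
print(jackknife_se([2.0, 4.0, 6.0, 8.0]))  # about 1.291
```

The computational burden the article addresses comes from the n recomputations of the statistic: for a linking procedure over many items, each leave-one-out replicate requires refitting the model, which the analytical approximation avoids.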

AppliedMath doi: 10.3390/appliedmath3010003

Authors: Serban Raicu Dorinela Costescu Mihaela Popa

Queue systems are essential in the modelling of transport systems. Increasing requirements from the beneficiaries of logistic services have led to a broadening of offerings. Consequently, models need to consider transport entities with priorities being assigned in relation to the costs corresponding to different classes of customers and/or processes. Waiting lines and queue disciplines substantially affect queue system performance. This paper aims to identify a solution for decreasing the waiting time, the total time in the system, and, overall, the cost linked to queueing delays. The influence of queue discipline on the waiting time and the total time in the system is analysed for several cases: (i) service for priority classes at the same rate of service with and without interruptions, and (ii) service for several priority classes with different service rates. The presented analysis is appropriate for increasing the performance of services dedicated to freight for two priority classes. It demonstrates how priority service can increase system performance by reducing the time in the system for customers with high costs. In addition, in the considered settings, the total time in the system is reduced for all customers, which leads to resource savings for system infrastructures.

AppliedMath doi: 10.3390/appliedmath3010002

Authors: Miriam Di Ianni

Graph dynamics for a node-labeled graph is a set of updating rules describing how the labels of each node in the graph change in time as a function of the global set of labels. The underpopulation rule is a graph dynamics derived by simplifying the set of rules constituting the Game of Life. It is known that the number of label configurations met by a graph during the dynamic process defined by such a rule is bounded by a polynomial in the size of the graph if the graph is undirected. As a consequence, predicting the label evolution is an easy problem (i.e., a problem in P) in such a case. In this paper, the generalization of the underpopulation rule to signed and directed graphs is studied. It is proved here that the number of label configurations met by a graph during the dynamic process defined by any such generalized underpopulation rule is still bounded by a polynomial in the size of the graph if the graph is undirected and structurally balanced, while it is not bounded by any polynomial in the size of the graph if the graph is directed, even if unsigned, unless P = PSPACE.

AppliedMath doi: 10.3390/appliedmath3010001

Authors: Yaxi Li Yue Kai

The main idea of this paper is to study the chaotic behavior of the Zakharov–Kuznetsov equation with perturbation. By applying the traveling wave transformation, we transform the perturbed Zakharov–Kuznetsov equation with dual-power law and triple-power law nonlinearity into planar dynamic systems, and then analyze how the external perturbed terms affect the chaotic behavior. We emphasize here that there is no chaotic phenomenon for the non-perturbed ZK equation; chaos is thus caused only by the external perturbed terms.

AppliedMath doi: 10.3390/appliedmath2040044

Authors: Stephanie Were Somtochukwu Godfrey Nnabuife Boyu Kuang

The current handling of gas associated with oil production poses an environmental risk. This gas is being flared off due to the technical and economic attractiveness of this option. As flared gases are mainly composed of methane, they have harmful greenhouse effects when released into the atmosphere. This work discusses the effectiveness of using this gas for enhanced oil recovery (EOR) purposes as an alternative to flaring. In this study, a micromodel was designed with properties similar to a sandstone rock with a porosity of 0.4, and computational fluid dynamics (CFD) techniques were applied to design an EOR system. Temperature effects were not considered in the study, and the simulation was run at atmospheric pressure. Five case studies were carried out with different interfacial tensions between the oil and gas (0.005 N/m, 0.017 N/m, and 0.034 N/m) and different injection rates for the gas (1 × 10⁻³ m/s, 1 × 10⁻⁴ m/s, and 1 × 10⁻⁶ m/s). The model was compared with a laboratory experiment measuring immiscible gas flooding. Factors affecting oil recovery, such as the interfacial tension between oil and gas, the viscosity, and the pressure, were studied in detail. The results showed that the surface tension at the oil and gas interface was a limiting factor for maximum oil recovery. The lower surface tension recovered 33% of the original oil in place. The capillary pressure was higher than the pressure in the micromodel, which lowered the amount of oil that was displaced. The study showed the importance of pressure maintenance to increase oil recovery for immiscible gas floods. It is recommended that a wider set of interfacial tensions between oil and gas be tested to obtain a range at which oil recovery is maximum for EOR with flared gas.

AppliedMath doi: 10.3390/appliedmath2040043

Authors: Sina Aghakhani Mohammad Sadra Rajabi

In general, customers are looking to receive their orders in the fastest time possible and to make purchases at a reasonable price. Consequently, the importance of having an optimal delivery time is increasingly evident these days. One of the structures that can meet the demand for large supply chains with numerous orders is the hierarchical integrated hub structure. Such a structure improves efficiency and reduces chain costs. To make logistics more cost-effective, hub-and-spoke networks are necessary as a means to achieve economies of scale. Many hub network design models only consider hub type but do not take into account the hub scale measured using freight volume. This paper proposes a multi-objective scheduling model for hierarchical hub structures (HHS), which is layered from top to bottom. In the third layer, the central hub takes factory products from decentralized hubs and sends them to other decentralized hubs to which customers are connected. In the second layer, non-central hubs are responsible for receiving products from the factory and transferring them to central hubs. These hubs are also responsible for receiving products from central hubs and sending them to customers. Lastly, the first layer contains factories responsible for producing products and providing for their customers. The factory uses the flexible flow-shop platform and structure to produce its products. The model's objective is to minimize transportation and production costs as well as product arrival times. To validate and evaluate the model, small instances have been solved and analyzed in detail with the weighted sum and ε-constraint method. Consequently, based on the mean ideal distance (MID) metric, two methods were compared for the designed instances.

AppliedMath doi: 10.3390/appliedmath2040042

Authors: Dongyung Kim

Recently, a crowd crush accident occurred in Seoul. Mathematics and data science can contribute to understanding this incident and to avoiding future accidents. In this paper, I suggest an optimized monitoring methodology to avoid crowd crush accidents with scattered data by searching for the global minimum of the minimax data or minsum data. These scattered data are the position data of cell phones at time t. Mathematically, I find an exact solution of the optimized monitoring region with the suggested methodology by using minimal constraints. The methodology is verified and validated, and its efficiency is demonstrated.

AppliedMath doi: 10.3390/appliedmath2040041

Authors: Ayan Bhattacharya

This paper obtains a measure-theoretic restriction that must be satisfied by a prior probability measure for posteriors to be computed in limited time. Specifically, it is shown that the prior must be factorizable. Factorizability is a set of independence conditions for events in the sample space that allows agents to calculate posteriors using only a subset of the dataset. The result has important implications for models in mathematical economics and finance that rely on a common prior. If one introduces the limited time restriction to Aumann&rsquo;s famous Agreeing to Disagree setup, one sees that checking for factorizability requires agents to have access to every event in the measure space, thus severely limiting the scope of the agreement result.

AppliedMath doi: 10.3390/appliedmath2040040

Authors: Robert Gardner Matthew Gladin

The well-known Enestr&ouml;m&ndash;Kakeya Theorem states that if P(z)=&sum;_{&#8467;=0}^{n}a_{&#8467;}z^{&#8467;} is a polynomial of degree n with real coefficients satisfying 0&le;a_0&le;a_1&le;&#8943;&le;a_n, then all the zeros of P lie in |z|&le;1 in the complex plane. Motivated by recent results concerning an Enestr&ouml;m&ndash;Kakeya &ldquo;type&rdquo; condition on real coefficients, we give similar results with hypotheses concerning the real and imaginary parts of the coefficients and concerning the moduli of the coefficients. In this way, our results generalize the other recent results.
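The real-coefficient statement is easy to spot-check numerically; a minimal sketch (the coefficients below are an arbitrary nondecreasing, nonnegative example):

```python
import numpy as np

# Nondecreasing, nonnegative coefficients a_0 <= a_1 <= ... <= a_n,
# here for P(z) = 1 + 2z + 2z^2 + 5z^3.
coeffs_low_to_high = [1.0, 2.0, 2.0, 5.0]

# numpy.roots expects the highest-degree coefficient first.
roots = np.roots(coeffs_low_to_high[::-1])
print(max(abs(roots)))  # <= 1, as the theorem guarantees
```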

AppliedMath doi: 10.3390/appliedmath2040039

Authors: Paul C. Arpin Mihail Popa Daniel B. Turner

The motions of nuclei in a molecule can be mathematically described by using normal modes of vibration, which form a complete orthonormal basis. Each normal mode describes oscillatory motion at a frequency determined by the momentum of the nuclei. Near equilibrium, it is common to apply the quantum harmonic-oscillator model, whose eigenfunctions intimately involve combinatorics. Each electronic state has distinct force constants; therefore, each normal-mode basis is distinct. Duschinsky proposed a linearized approximation to the transformation between the normal-mode bases of two electronic states using a rotation matrix. The rotation angles are typically obtained by using quantum-chemical computations or via gas-phase spectroscopy measurements. Quantifying the rotation angles in the condensed phase remains a challenge. Here, we apply a two-dimensional harmonic model that includes a Duschinsky rotation to condensed-phase femtosecond coherence spectra (FCS), which are created in transient&ndash;absorption spectroscopy measurements through impulsive excitation of coherent vibrational wavepackets. Using the 2D model, we simulate spectra to identify the signatures of Duschinsky rotation. The results suggest that peak multiplicities and asymmetries may be used to quantify the rotation angle, which is a key advance in condensed-phase molecular spectroscopy.

AppliedMath doi: 10.3390/appliedmath2040038

Authors: Débora N. Diniz Breno N. S. Keller Mariana T. Rezende Andrea G. C. Bianchi Claudia M. Carneiro Renata R. e R. Oliveira Eduardo J. S. Luz Daniela M. Ushizima Fátima N. S. de Medeiros Marcone J. F. Souza

Screening of Pap smear images continues to depend upon cytopathologists&rsquo; manual scrutiny, and the results are highly influenced by professional experience, leading to varying degrees of cell classification inaccuracies. In order to improve the quality of Pap smear results, several efforts have been made to create software to automate and standardize the processing of medical images. In this work, we developed the CEA (Cytopathologist Eye Assistant), an easy-to-use tool to aid cytopathologists in performing their daily activities. The tool was tested by a group of cytopathologists, whose feedback indicates that CEA could be a valuable addition to Pap smear image analysis routines. For the construction of the tool, we evaluated different YOLO configurations and classification approaches. The best combination of algorithms uses YOLOv5s as the detection algorithm and an ensemble of EfficientNets as the classification algorithm. This configuration achieved 0.726 precision, 0.906 recall, and 0.805 F1-score when considering individual cells. We also analyzed classification of the image as a whole; in that case, the best configuration was YOLOv5s performing both the detection and classification tasks, achieving 0.975 precision, 0.992 recall, 0.970 accuracy, and 0.983 F1-score.

AppliedMath doi: 10.3390/appliedmath2040037

Authors: Evangelos Ioannidis Dimitrios Tsoumaris Dimitrios Ntemkas Iordanis Sarikeisoglou

ESG ratings are data-driven indices, focused on three key pillars (Environmental, Social, and Governance), which are used by investors to evaluate companies and countries in terms of sustainability. A reasonable question that arises is how these ratings are associated with each other. The research purpose of this work is to provide the first analysis of correlation networks constructed from the ESG ratings of selected economies. The networks are constructed based on Pearson correlation and analyzed with some well-known tools from network science, namely: degree centrality of the nodes, degree centralization of the network, network density, and network balance. We found that the Prevalence of Overweight and Life Expectancy are the most central ESG ratings, while, unexpectedly, two of the most commonly used economic indicators, namely GDP growth and Unemployment, are at the bottom of the list. China&rsquo;s ESG network has remarkably high positive and high negative centralization, which has strong implications for the network&rsquo;s vulnerability and targeted controllability. Interestingly, if the sign of the correlations is omitted, this result cannot be captured; this is a clear example of why signed network analysis is needed. The most striking result of our analysis is that the ESG networks are extremely balanced, i.e., they are split into two anti-correlated groups of ESG ratings (nodes). It is impressive that the USA&rsquo;s network achieves 97.9% balance, i.e., an almost perfect structural split into two anti-correlated groups of nodes. This split of the network structure may have strong implications for hedging risk if we view ESG ratings as underlying assets for portfolio selection. Investing in anti-correlated assets, known as &ldquo;hedge assets&rdquo;, can be useful for offsetting potential losses. Our future direction is to apply and extend the proposed signed network analysis to the ESG ratings of corporate organizations, aiming to design optimal portfolios with the desired balance between risk and return.
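A minimal sketch of the kind of signed (Pearson) correlation network described above, built from toy data rather than actual ESG ratings:

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy panel: yearly values of four hypothetical ratings (columns).
data = rng.normal(size=(30, 4))
data[:, 1] = data[:, 0] + 0.1 * rng.normal(size=30)   # strongly correlated pair
data[:, 3] = -data[:, 2] + 0.1 * rng.normal(size=30)  # anti-correlated pair

C = np.corrcoef(data, rowvar=False)  # signed adjacency matrix (Pearson)
np.fill_diagonal(C, 0.0)

# Signed degree centrality: sum of positive vs. negative link weights per node.
pos_deg = np.where(C > 0, C, 0).sum(axis=1)
neg_deg = np.where(C < 0, -C, 0).sum(axis=1)
print(pos_deg.round(2), neg_deg.round(2))
```

In such a signed network, a strongly negative link (like the engineered pair above) contributes to negative centralization and to the two-group balance structure the abstract describes; omitting the sign would erase exactly that information.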

AppliedMath doi: 10.3390/appliedmath2040036

Authors: Zhenkun Zhang Hongjian Lai

The cutwidth minimization problem consists of finding an arrangement of the vertices of a graph G on a line Pn with n=|V(G)| vertices in such a way that the maximum number of overlapping edges (i.e., the congestion) is minimized. A graph G with a cutwidth of k is k-cutwidth critical if every proper subgraph of G has a cutwidth less than k and G is homeomorphically minimal. In this paper, we first verified some structural properties of k-cutwidth critical unicyclic graphs with k&gt;1. We then mainly investigated the critical unicyclic graph set T with a cutwidth of four that contains fifty elements, and obtained a forbidden subgraph characterization of 3-cutwidth unicyclic graphs.
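For intuition, the cutwidth of a small graph can be computed exactly by brute force over all linear arrangements (a sketch, feasible only for tiny graphs; the examples are illustrative, not from the paper):

```python
from itertools import permutations

def cutwidth(n, edges):
    """Exact cutwidth of a small graph: minimize, over all linear
    arrangements of the n vertices, the maximum edge congestion at a gap."""
    best = float("inf")
    for order in permutations(range(n)):
        pos = {v: i for i, v in enumerate(order)}
        # Congestion at the gap between positions i and i+1.
        width = max(
            sum(1 for u, v in edges
                if min(pos[u], pos[v]) <= i < max(pos[u], pos[v]))
            for i in range(n - 1)
        )
        best = min(best, width)
    return best

print(cutwidth(3, [(0, 1), (1, 2), (0, 2)]))  # triangle C3: 2
print(cutwidth(4, [(0, 1), (1, 2), (2, 3)]))  # path P4: 1
```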

AppliedMath doi: 10.3390/appliedmath2040035

Authors: Vasant Chavan

The aim of the present paper is to study and investigate the geometrical properties of a concircular curvature tensor on generalized Sasakian-space-forms. In this manner, we obtained results for &#981;-concircularly flat, &#981;-semisymmetric, locally concircularly symmetric and locally concircularly &#981;-symmetric generalized Sasakian-space-forms. Finally, we construct examples of the generalized Sasakian-space-forms to verify some results.

AppliedMath doi: 10.3390/appliedmath2040034

Authors: Stanislav Yu. Lukashchuk

A nonlocally perturbed linear Schr&ouml;dinger equation with a small parameter was derived under the assumption of low-level fractionality by using one of the known general nonlocal wave equations with an infinite power-law memory. The problem of finding approximate symmetries for the equation is studied here. It has been shown that the perturbed Schr&ouml;dinger equation inherits all symmetries of the classical linear equation. It has also been proven that approximate symmetries corresponding to Galilean transformations and projective transformations of the unperturbed equation are nonlocal. In addition, a special class of nonlinear, nonlocally perturbed Schr&ouml;dinger equations that admits an approximate nonlocal extension of the Galilei group is derived. An example of constructing an approximately invariant solution for the linear equation using approximate scaling symmetry is presented.

AppliedMath doi: 10.3390/appliedmath2040033

Authors: Feng Zhao Shao-Lun Huang

While non-linear activation functions play vital roles in artificial neural networks, it is generally unclear how the non-linearity can improve the quality of function approximations. In this paper, we present a theoretical framework to rigorously analyze the performance gain of using non-linear activation functions for a class of residual neural networks (ResNets). In particular, we show that when the input features for the ResNet are uniformly chosen and orthogonal to each other, using non-linear activation functions to generate the ResNet output outperforms using linear activation functions on average, and the performance gain can be explicitly computed. Moreover, we show that when the activation functions are chosen as polynomials with degree much less than the dimension of the input features, the optimal activation functions can be precisely expressed in the form of Hermite polynomials. This demonstrates the role of Hermite polynomials in the function approximation of ResNets.

AppliedMath doi: 10.3390/appliedmath2040032

Authors: Stefanos Samaras Christine Böckmann Christoph Ritter

Extracting information about the shape or size of non-spherical aerosol particles from limited optical radar data is a well-known inverse ill-posed problem. The purpose of this study is to identify a robust and stable regularization method, including an appropriate parameter choice rule, to address the latter problem. First, we briefly review common regularization methods and investigate a new iterative family of generalized Runge&ndash;Kutta filter regularizers. Next, we model a spheroidal particle ensemble and test different regularization methods on it, experimenting with artificial data pertaining to several atmospheric scenarios. We found that one method of the newly introduced generalized family, combined with the L-curve method, performs better than traditional methods.

AppliedMath doi: 10.3390/appliedmath2040031

Authors: Richard A. Chechile

The retention of human memory is a process that can be understood from a hazard-function perspective. Hazard is the conditional probability of a state change at time t given that the state change has not yet occurred. After reviewing the underlying mathematical results of hazard functions in general, there is an analysis of the hazard properties associated with nine theories of memory that emerged from psychological science. Five theories predict a strictly monotonically decreasing hazard, whereas the other four theories predict a peak-shaped hazard function that rises initially to a peak and then decreases for longer time periods. Thus, the behavior of hazard shortly after the initial encoding is the critical difference among the theories. Several theorems provide a basis to explore hazard for the initial time period after encoding in terms of a more practical surrogate function that is linked to the behavior of the hazard function. Evidence for a peak-shaped hazard function is provided, and a case is made for one particular psychological theory of memory that posits that memory encoding produces two redundant representations that have different hazard properties. One memory representation has increasing hazard while the other representation has decreasing hazard.
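The distinction between monotone and peak-shaped hazards can be made concrete with toy retention distributions (illustrative parameters, not fitted to memory data): an exponential lifetime has constant hazard, while a lognormal lifetime has a hazard that rises to a peak and then falls.

```python
import math

def hazard(f, S, t):
    """Hazard h(t) = f(t)/S(t): instantaneous rate of a state change at
    time t, given that it has not yet occurred."""
    return f(t) / S(t)

# Exponential retention: memoryless, so the hazard is constant.
lam = 0.5
h_exp = lambda t: hazard(lambda s: lam * math.exp(-lam * s),
                         lambda s: math.exp(-lam * s), t)

# Lognormal retention (mu = 0, sigma = 1): peak-shaped hazard.
Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF
f_ln = lambda t: math.exp(-math.log(t) ** 2 / 2) / (t * math.sqrt(2 * math.pi))
S_ln = lambda t: 1.0 - Phi(math.log(t))
h_ln = lambda t: hazard(f_ln, S_ln, t)

print(h_exp(1.0), h_exp(10.0))            # both 0.5: flat hazard
print(h_ln(0.2), h_ln(2.0), h_ln(50.0))   # rises, peaks, then decreases
```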

AppliedMath doi: 10.3390/appliedmath2040030

Authors: Robert Vrabel

In this short paper, we study the problem of traversing a crossbar through a bent channel, which has been formulated as a nonlinear convex optimization problem. The result is a MATLAB code that we can use to compute the maximum length of the crossbar as a function of the widths of the channel (its two parts) and the angle between them. In the case where the two parts are perpendicular to each other, the result is expressed analytically and is closely related to the astroid curve (a hypocycloid with four cusps).
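For the perpendicular case, the classical closed form for the longest bar that fits around a right-angled corner of channel widths a and b is L = (a^(2/3) + b^(2/3))^(3/2), whose envelope is an astroid. A sketch checking this against direct minimization (this is the textbook corner result under the stated assumptions, not the paper's MATLAB code):

```python
import math

def max_crossbar_length(a, b):
    """Closed form for perpendicular channel parts of widths a and b."""
    return (a ** (2 / 3) + b ** (2 / 3)) ** 1.5

def max_crossbar_numeric(a, b, n=20_000):
    # The longest bar that fits equals the minimum, over angles theta,
    # of the length a/sin(theta) + b/cos(theta) of a segment through
    # the inner corner.
    return min(a / math.sin(t) + b / math.cos(t)
               for t in (math.pi / 2 * (k + 0.5) / n for k in range(n)))

print(max_crossbar_length(1.0, 1.0))   # 2*sqrt(2) ~ 2.8284
print(max_crossbar_numeric(2.0, 1.0))  # matches max_crossbar_length(2, 1)
```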

AppliedMath doi: 10.3390/appliedmath2030029

Authors: Mikhail Sgibnev

We consider the inhomogeneous Wiener&ndash;Hopf equation whose kernel is a nonarithmetic probability distribution with positive mean. The inhomogeneous term behaves like a submultiplicative function. We establish asymptotic properties of the solution to which the successive approximations converge. These properties depend on the asymptotics of the submultiplicative function.

AppliedMath doi: 10.3390/appliedmath2030028

Authors: Ted Gyle Lewis

A mathematical description of catastrophe in complex systems modeled as a network is presented, with emphasis on network topology and its relationship to risk and resilience. We present mathematical formulas for computing risk, resilience, and the likelihood of faults in the nodes/links of network models of complex systems, and we illustrate the application of the formulas to the simulation of catastrophic failure. This model is not related to the nonlinear &ldquo;Catastrophe theory&rdquo; of Ren&eacute; Thom, E.C. Zeeman and others. Instead, we present a strictly probabilistic network model for estimating risk and resilience&mdash;two useful metrics used in practice. We propose a mathematical model of exceedance probability, risk, and resilience and show that these properties depend wholly on vulnerability, consequence, and properties of the network representation of the complex system. We simulate the network under stress that causes one or more nodes/links to fail in order to extract properties of risk and resilience. In this paper, two types of stress are considered: viral cascades and flow cascades. One unified definition of risk, MPL, is proposed, and three kinds of resilience are illustrated&mdash;viral cascading, blocking node/link, and flow resilience. The principal contributions of this work are new equations for risk and resilience and measures of resilience based on the vulnerability of individual nodes/links and on network topology expressed in terms of spectral radius, bushy, and branchy metrics. We apply the model to a variety of networks&mdash;hypothetical and real&mdash;and show that network topology needs to be included in any definition of network risk and resilience. In addition, we show how simulations can identify likely future faults due to viral and flow cascades. Simulations of this nature are useful to the practitioner.

AppliedMath doi: 10.3390/appliedmath2030027

Authors: M. Rodrigo D. Zulkarnaen

A general population model with variable carrying capacity, consisting of a coupled system of nonlinear ordinary differential equations, is proposed, and a procedure for obtaining analytical solutions for three broad classes of models is provided. A particular case is when the population and carrying capacity per capita growth rates are proportional. As an example, a generalised Thornley&ndash;France model is given. Further examples are given when the growth rates are not proportional. A criterion for when inflexion may occur is also provided, and results of numerical simulations are presented.

AppliedMath doi: 10.3390/appliedmath2030026

Authors: Edoardo Ballico

Let X&sub;P^r be an integral and non-degenerate variety. A &ldquo;cost-function&rdquo; (for the Zariski topology, the semialgebraic one, or the Euclidean one) is a semicontinuous function w: X&rarr;[1,+&infin;)&cup;{+&infin;} such that w(a)=1 for all a in a non-empty open subset of X. For any q&isin;P^r, the rank r_{X,w}(q) of q with respect to (X,w) is the minimum of all &sum;_{a&isin;S}w(a), where S is a finite subset of X spanning q. We have r_{X,w}(q)&lt;+&infin; for all q. We discuss this definition and classify extremal cases of pairs (X,q). We give upper bounds for all r_{X,w}(q) (twice the generic rank) not depending on w. This notion is the generalization of the case in which the cost-function w is the constant function 1. In this case, the rank is a well-studied notion that covers the tensor rank of tensors of arbitrary formats (PARAFAC or CP decomposition) and the additive decomposition of forms. We also adapt to cost-functions the rank 1 decomposition of real tensors in which we allow pairs of complex conjugate rank 1 tensors.

AppliedMath doi: 10.3390/appliedmath2030025

Authors: Frédéric Ouimet

In this paper, we develop local expansions for the ratio of the centered matrix-variate T density to the centered matrix-variate normal density with the same covariances. The approximations are used to derive upper bounds on several probability metrics (such as the total variation and Hellinger distance) between the corresponding induced measures. This work extends some previous results for the univariate Student distribution to the matrix-variate setting.

AppliedMath doi: 10.3390/appliedmath2030024

Authors: Dmitry Ponomarev

We review several one-dimensional problems such as those involving the linear Schr&ouml;dinger equation, the variable-coefficient Helmholtz equation, the Zakharov&ndash;Shabat system and the Kubelka&ndash;Munk equations. We show that they all can be reduced to solving one simple antilinear ordinary differential equation u&prime;(x)=f(x)u(x)&macr; (where the overbar denotes complex conjugation) or its nonhomogeneous version u&prime;(x)=f(x)u(x)&macr;+g(x), x&isin;(0,x0)&sub;R. We point out some of the advantages of the proposed reformulation and call for further investigation of the obtained ODE.

AppliedMath doi: 10.3390/appliedmath2030023

Authors: Johanna Barzen Frank Leymann

Shor&rsquo;s algorithm for prime factorization is a hybrid algorithm consisting of a quantum part and a classical part. The main focus of the classical part is a continued fraction analysis. The presentation of this is often short, pointing to text books on number theory. In this contribution, we present the relevant results and proofs from the theory of continued fractions in detail (even in more detail than in text books), filling the gap to allow a complete comprehension of Shor&rsquo;s algorithm. Similarly, we provide a detailed computation of the estimation of the probability that convergents will provide the period required for determining a prime factor.
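The classical-part mechanics can be sketched as follows: the convergents p_k/q_k of the measured fraction are produced by the continued fraction recurrences p_k = a_k p_{k-1} + p_{k-2} and q_k = a_k q_{k-1} + q_{k-2}, and one of the denominators q_k reveals the period. The numbers below are an illustrative toy (a fraction 853/2048 close to 5/12), not a worked instance from Shor's analysis:

```python
from fractions import Fraction

def convergents(x: Fraction):
    """Yield the successive convergents p_k/q_k of the continued
    fraction expansion of the nonnegative rational x."""
    p_prev, p = 1, x.numerator // x.denominator   # p_{-1}, p_0
    q_prev, q = 0, 1                              # q_{-1}, q_0
    yield Fraction(p, q)
    frac = x - p
    while frac:
        frac = 1 / frac
        a = frac.numerator // frac.denominator    # next partial quotient
        p_prev, p = p, a * p + p_prev
        q_prev, q = q, a * q + q_prev
        yield Fraction(p, q)
        frac -= a

# A "measurement" 853 out of 2^11 = 2048 approximates s/r with r = 12:
# the convergent 5/12 recovers the period as its denominator.
print(list(convergents(Fraction(853, 2048))))
```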

AppliedMath doi: 10.3390/appliedmath2030022

Authors: Tohru Morita

Discussions are presented by Morita and Sato on the problem of obtaining the particular solution of an inhomogeneous differential equation with polynomial coefficients in terms of the Green&rsquo;s function. In one paper, the problem is treated in distribution theory, and in another paper, the formulation is given on the basis of nonstandard analysis, where a fractional derivative whose degree is a complex number plus an infinitesimal is used. In the present paper, a simple recipe based on nonstandard analysis, which is closely related to distribution theory, is presented, where in place of Heaviside&rsquo;s step function H(t) and Dirac&rsquo;s delta function &delta;(t) in distribution theory, the functions H_&#1013;(t):=t^&#1013;H(t)/&Gamma;(1+&#1013;) and &delta;_&#1013;(t):=(d/dt)H_&#1013;(t)=t^(&#1013;&minus;1)H(t)/&Gamma;(&#1013;), for a positive infinitesimal number &#1013;, are used. As an example, the recipe is applied to Kummer&rsquo;s differential equation.

AppliedMath doi: 10.3390/appliedmath2030021

Authors: Aris Alexopoulos

New definitions for fractional integro-differential operators are presented and referred to as delayed fractional operators. It is shown that delayed fractional derivatives give rise to the notion of functional order differentiation. Functional differentiation can be used to establish dualities and asymptotic mixtures between unrelated theories, something that conventional fractional or integer operators cannot do. In this paper, dualities and asymptotic mixtures are established between arbitrary functions, probability densities, the Gibbs&ndash;Shannon entropy and Hellinger distance, as well as higher-dimensional particle geometries in quantum mechanics.

AppliedMath doi: 10.3390/appliedmath2030020

Authors: Darin J. Ulness

This work focuses on the structure and properties of the triangular numbers modulo m. The most important aspect of the structure of these numbers is their periodic nature. It is proven that the triangular numbers modulo m form a 2m-cycle for any m. Additional structural features and properties of this system are presented and discussed. This discussion is aided by various representations of these sequences, such as network graphs, and through discrete Fourier transformation. The concept of saturation is developed and explored, as are monoid sets and the roles of perfect squares and nonsquares. The triangular numbers modulo m have self-similarity and scaling features, which are discussed as well.
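The 2m-periodicity is easy to verify directly, since T_{n+2m} &minus; T_n = m(2n + 2m + 1) &equiv; 0 (mod m); a quick check with an arbitrary modulus:

```python
def triangular_mod(m, count):
    """First `count` triangular numbers T_n = n(n+1)/2 reduced modulo m."""
    return [n * (n + 1) // 2 % m for n in range(count)]

m = 7  # arbitrary illustrative modulus
seq = triangular_mod(m, 4 * m)
print(seq[:2 * m])                 # one full 2m-cycle
print(seq[:2 * m] == seq[2 * m:])  # True: the cycle repeats
```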

AppliedMath doi: 10.3390/appliedmath2020019

Authors: Giancarlo Pastor Jae-Oh Woo

This paper introduces a new measure of quantum entropy, called the effective quantum entropy (EQE). The EQE is an extension, to the quantum setting, of a recently derived classical generalized entropy. We present a thorough verification of its properties. Like its predecessor, the EQE is a semi-strict quasi-concave function; it is capable of generating many of the various measures of quantum entropy that are useful in practice. Thereafter, we construct a consistent estimator for our proposed measure and empirically test its estimation error under different system dimensions and numbers of measurements. Overall, we build the grounds of the EQE, which will facilitate the analysis and verification of the next innovative quantum technologies.

AppliedMath doi: 10.3390/appliedmath2020018

Authors: Niklas Wulkow

A statistical, data-driven method is presented that quantifies influences between variables of a dynamical system. The method is based on finding a suitable representation of points by fuzzy affiliations with respect to landmark points using the Scalable Probabilistic Approximation algorithm. This is followed by the construction of a linear mapping between these affiliations for different variables and forward in time. This linear mapping, or matrix, can be directly interpreted in light of unidirectional dependencies, and relevant properties of it are quantified. These quantifications, given by the sum of singular values and the average row variance of the matrix, then serve as measures for the influences between variables of the dynamics. The validity of the method is demonstrated with theoretical results and on several numerical examples, covering deterministic, stochastic, and delayed types of dynamics. Moreover, the method is applied to a non-classical example given by real-world basketball player movement, which exhibits highly random movement and comes without a physical intuition, contrary to many examples from, e.g., life sciences.

AppliedMath doi: 10.3390/appliedmath2020017

Authors: Efthimios Providas

First, we develop a direct operator method for solving boundary value problems for a class of nth order linear Volterra&ndash;Fredholm integro-differential equations of convolution type. The proposed technique is based on the assumption that the Volterra integro-differential operator is bijective and its inverse is known in closed form. Existence and uniqueness criteria are established and the exact solution is derived. We then apply this method to construct the closed-form solution of the fourth order equilibrium equations for the bending of Euler&ndash;Bernoulli beams in the context of Eringen&rsquo;s nonlocal theory of elasticity (two-phase integral model) under a transverse distributed load and simply supported boundary conditions. An easy-to-use algorithm for obtaining the exact solution in a symbolic algebra system is also given.

AppliedMath doi: 10.3390/appliedmath2020016

Authors: Daniel A. Griffith

Today, calculus frequently is taught with artificial intelligence in the form of computer algebra systems. Although these software packages may reduce tedium associated with the mechanics of calculus, they may be less effective if not supplemented by the accompanying teaching of calculus theory. This paper presents two examples from spatial statistics in which computer software in an unsupervised auto-execution mode fails, or can fail, to yield correct calculus results. Accordingly, it emphasizes the need to teach calculus theory when using software packages such as Mathematica and Maple.

AppliedMath doi: 10.3390/appliedmath2020015

Authors: Alexander Y. Klimenko

Complex adaptive and evolutionary systems can, at least in principle, be modelled in ways that are similar to the modelling of complex mechanical (or physical) systems. While quantitative modelling of turbulent reacting flows has been developed over many decades due to the availability of experimental data, modelling of complex evolutionary systems is still in its infancy and has huge potential for further development. This work analyses recent trends, points to the similarity of modelling approaches used in seemingly different areas, and suggests a basic classification for such approaches. The availability of data in the modern computerised world allows us to use tools previously developed in physics and applied mathematics in new domains of scientific inquiry that previously were not amenable to quantitative evaluation and modelling, while raising concerns about the associated ethical and legal issues. While the utility of big data has been repeatedly demonstrated in various practical applications, these applications, as far as we can judge, do not involve the scientific goal of conceptual modelling of emergent collective behaviour in complex evolutionary systems.

AppliedMath doi: 10.3390/appliedmath2020014

Authors: Stephen W. Carden Jedidiah O. Lindborg Zheni Utic

Reinforcement learning (RL) is a subdomain of machine learning concerned with achieving optimal behavior by interacting with an unknown and potentially stochastic environment. The exploration strategy for choosing actions is an important component for enabling the decision agent to discover how to obtain high rewards. If constructed well, it may reduce the learning time of the decision agent. Exploration in discrete problems has been well studied, but there are fewer strategies applicable to continuous dynamics. In this paper, we propose a Low-Discrepancy Action Selection (LDAS) process, a novel exploration strategy for environments with continuous states and actions. This algorithm focuses on prioritizing unknown regions of the state-action space with the intention of finding ideal actions faster than pseudo-random action selection. Results of experimentation with three benchmark environments elucidate the situations in which LDAS is superior and introduce a metric for quantifying the quality of exploration.

AppliedMath doi: 10.3390/appliedmath2020013

Authors: Rajeev Rajaram Nathan Ritchey

We use a payment pattern of the type {1^k, 2^k, 3^k, &hellip;} to generalize the standard level payment and increasing annuity to polynomial payment patterns. We derive explicit formulas for the present value of an n-year polynomial annuity, the present value of an m-monthly n-year polynomial annuity, and the present value of an n-year continuous polynomial annuity. We also use the idea to extend the annuities to payment patterns derived from analytic functions, as well as to payment patterns of the type {1^r, 2^r, 3^r, &hellip;}, with r being an arbitrary real number. In the process, we develop possible approximations to k! and to the gamma function evaluated at real numbers.
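A direct numerical check of the polynomial-annuity idea (rates and terms below are illustrative; the comparisons use the standard level and increasing annuity closed forms, not the paper's generalized formulas):

```python
def pv_polynomial_annuity(i, n, k):
    """Present value of an n-year annuity-immediate paying t^k at the
    end of year t, at effective annual interest rate i."""
    v = 1.0 / (1.0 + i)
    return sum(t ** k * v ** t for t in range(1, n + 1))

rate, years = 0.05, 10
level = pv_polynomial_annuity(rate, years, 0)       # k = 0: level annuity a_n
increasing = pv_polynomial_annuity(rate, years, 1)  # k = 1: increasing (Ia)_n
print(level)       # equals (1 - v^n)/i
print(increasing)  # equals (a-double-dot_n - n*v^n)/i
```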

AppliedMath doi: 10.3390/appliedmath2020012

Authors: Achiya Dax

The Kaczmarz method is an important tool for solving large sparse linear systems that arise in computerized tomography. The Kaczmarz anomaly phenomenon has been observed recently when solving certain types of random systems. This raises the question of whether a similar anomaly occurs in tomography problems. The aim of the paper is to answer this question, to examine the extent of the phenomenon and to explain its reasons. Another tested issue is the ability of random row shuffles to sharpen the anomaly and to accelerate the rate of convergence. The results add important insight into the nature of the Kaczmarz method.
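A minimal sketch of the classical cyclic Kaczmarz iteration on a consistent random system (illustrative data, not the paper's tomography problems or its row-shuffling experiments):

```python
import numpy as np

def kaczmarz(A, b, sweeps=200):
    """Classical cyclic Kaczmarz: repeatedly project the iterate onto
    the hyperplane of one equation a_i . x = b_i at a time."""
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for a_i, b_i in zip(A, b):
            x += (b_i - a_i @ x) / (a_i @ a_i) * a_i
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 10))   # overdetermined, consistent system
x_true = rng.normal(size=10)
b = A @ x_true
x = kaczmarz(A, b)
print(np.linalg.norm(x - x_true))  # small: the sweeps converge to x_true
```

Shuffling the order of the rows between sweeps, as tested in the paper, changes only the iteration order, not the fixed point.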

AppliedMath doi: 10.3390/appliedmath2020011

Authors: Aman Bhargava Mohammad R. Rezaei Milad Lankarany

An ongoing challenge in neural information processing is the following question: how do neurons adjust their connectivity to improve network-level task performance over time (i.e., actualize learning)? It is widely believed that there is a consistent, synaptic-level learning mechanism in specific brain regions, such as the basal ganglia, that actualizes learning. However, the exact nature of this mechanism remains unclear. Here, we investigate the use of universal synaptic-level algorithms in training connectionist models. Specifically, we propose an algorithm based on reinforcement learning (RL) to generate and apply a simple biologically-inspired synaptic-level learning policy for neural networks. In this algorithm, the action space for each synapse in the network consists of a small increase, decrease, or null action on the connection strength. To test our algorithm, we applied it to a multilayer perceptron (MLP) neural network model. This algorithm yields a static synaptic learning policy that enables the simultaneous training of over 20,000 parameters (i.e., synapses) and consistent learning convergence when applied to simulated decision boundary matching and optical character recognition tasks. The trained networks yield character-recognition performance comparable to identically shaped networks trained with gradient descent. The approach has two significant advantages in comparison to traditional gradient-descent-based optimization methods. First, the robustness of our novel method and its lack of reliance on gradient computations opens the door to new techniques for training difficult-to-differentiate artificial neural networks, such as spiking neural networks (SNNs) and recurrent neural networks (RNNs). Second, the method&rsquo;s simplicity provides a unique opportunity for further development of local information-driven multiagent connectionist models for machine intelligence analogous to cellular automata.

AppliedMath doi: 10.3390/appliedmath2020010

Authors: Dominic Cortis Muhsin Tamturk

The sports betting industry has been growing at a phenomenal rate and has many similarities to the financial market in that a payout is made contingent on the outcome of an event. Despite this, there has been little to no mathematical focus on the potential ruin of bookmakers. In this paper, the expected profit of a bookmaker and the probability of multiple soccer matches are observed via Dirac notations and Feynman&rsquo;s path calculations. Furthermore, we take unforeseen circumstances into account by subjecting the betting process to additional uncertainty. A perturbed betting process, defined by modifying the conventional stochastic process, is used to scale and manage this uncertainty.

AppliedMath doi: 10.3390/appliedmath2010009

Authors: Frank Lad

I produce a coherent mathematical formulation of the supplementary variables structure for Aspect&rsquo;s experimental test of Bell&rsquo;s inequality as devised by Clauser, Horne, Shimony, and Holt, a formalization which has been widely considered to be impossible. Contrary to Aspect&rsquo;s understanding, it is made clear that a supplementary variable formulation can represent any tendered probability distribution whatsoever. This includes both the QM distribution and the &ldquo;naive distribution&rdquo;, which he had suggested as a foil. It has long been known that quantum theory does not support a complete distribution for the components of the thought experiment that underlies the inequality. However, further than that, here I identify precisely the bounding polytope of distributions that do cohere with both its explicit premises and with the prospect of supplementary variables. In this context, it is found once again that every distribution within this polytope respects the conditions of Bell&rsquo;s inequality, and that the famous evaluation of the gedankenexpectation defying it as 2&radic;2 is mistaken. The argument is relevant to all subsequent embellishments of experimental methodology post-Aspect, designed to block seven declared possible loopholes. The probabilistic prognostications of quantum theory are not denied, nor are the experimental observations. However, their inferential implications have been misrepresented.

AppliedMath doi: 10.3390/appliedmath2010008

Authors: Jacopo Giacomelli

We design a simple technique to control the position of a localized matter wave. Our system is composed of two counter-phased periodic potentials and a third optical lattice, which can be either periodic or disordered. The only control needed on the system is a three-state switch that allows the sudden selection of the desired potential. The method is proposed as a possible new way to realize a multi-state bit. We show that this framework is robust, and that the multi-state bit behavior can be observed under weak assumptions. Given the current degree of development of matter wave control in optical lattices, we believe that the proposed device would be easily reproducible in a laboratory, allowing for testing and industrial applications.

AppliedMath doi: 10.3390/appliedmath2010007

Authors: Hanno Essén Johan C.-E. Stén

Wigner showed that a sufficiently thin electron gas will condense into a crystal of localized electrons. Here, we show, using a model based on cubic charge distributions that gives exact results, that the Coulomb repulsion energy of localized charge distributions is lower than that of delocalized distributions, even though the total charge distribution is the same. Assuming a simple cubic geometry, we obtain an explicit result for the energy reduction. This reduction results from the exclusion of self-interactions of the electrons. The corresponding results for electron pairs are also discussed.

AppliedMath doi: 10.3390/appliedmath2010006

Authors: Nathan Ritchey Rajeev Rajaram

We use the representation of a continuous-time Hattendorff differential equation and Matlab to compute ²σt(j), the solution of a backwards-in-time differential equation that describes the evolution of the variance of Lt(j), the loss-at-time-t random variable for a multi-state Markovian process, given that the state at time t is j. We demonstrate this process by solving several instances of a multi-state model, which a practitioner can use as a guide to solve and analyze specific multi-state models. Numerical solutions for the variance ²σt(j) enable practitioners and academic researchers to test and simulate various state-space scenarios, with possible transitions to and from temporary disabilities, to permanent disabilities, to and from good health, and eventually to a deceased state. The solution method presented in this paper allows researchers and practitioners to easily compute the evolution of the variance of loss without having to resort to detailed programming.
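
The numerical pattern involved is a terminal-value problem: the equation is integrated from the terminal time backwards to 0. The sketch below shows only that generic pattern with a toy right-hand side; it is not the Hattendorff equation itself, whose right-hand side involves transition intensities and state-specific loss terms.

```python
import math

# Generic backwards-in-time integration of dy/dt = f(t, y) with terminal
# condition y(T) = yT, stepping from t = T down to t = 0 by explicit Euler.
def solve_backward(f, T, yT, n_steps):
    h = T / n_steps
    t, y = T, yT
    for _ in range(n_steps):
        y -= h * f(t, y)   # step backwards in time
        t -= h
    return y               # approximation of y(0)

# Toy check: dy/dt = -y with y(1) = exp(-1) should recover y(0) = 1.
y0 = solve_backward(lambda t, y: -y, 1.0, math.exp(-1.0), 100000)
print(round(y0, 3))  # 1.0
```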

AppliedMath doi: 10.3390/appliedmath2010005

Authors: Anant Gupta Idriss J. Aberkane Sourangshu Ghosh Adrian Abold Alexander Rahn Eldar Sultanow

This paper investigates the behavior of rotating binaries. A rotation by r digits to the left of a binary number B exhibits, in particular cases, the divisibility l | N1(B)·r + 1, where l is the bit-length of B and N1(B) is the Hamming weight of B, that is, the number of ones in B. The integer r is called the left-rotational distance. We investigate the connection between this rotational distance, the length, and the Hamming weight of binary numbers. Moreover, we pursue the question of the circumstances under which the above-mentioned divisibility holds. We found, and will demonstrate, that this divisibility occurs for kn + c cycles.
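
The quantities in play can be sketched directly: the example below rotates a made-up 4-bit number and checks the divisibility condition for each left-rotational distance r, which, as the abstract notes, holds only in particular cases.

```python
# Left-rotation of an l-bit number B, its Hamming weight N1(B), and the
# check l | N1(B)*r + 1. B is an arbitrary example, not from the paper.
def rotate_left(B, r, l):
    """Rotate the l-bit binary representation of B left by r digits."""
    r %= l
    mask = (1 << l) - 1
    return ((B << r) | (B >> (l - r))) & mask

def hamming_weight(B):
    """N1(B): the number of ones in B."""
    return bin(B).count("1")

B = 0b1011                    # l = 4, N1(B) = 3
l = B.bit_length()
for r in range(1, l):
    divisible = (hamming_weight(B) * r + 1) % l == 0
    print(r, format(rotate_left(B, r, l), "04b"), divisible)
```

For this B only r = 1 satisfies the divisibility (3·1 + 1 = 4 is divisible by l = 4).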

AppliedMath doi: 10.3390/appliedmath2010004

Authors: Georg J. Schmitz

Mereotopology is a concept rooted in analytical philosophy. The phase-field concept is based on mathematical physics and finds applications in materials engineering. The two concepts seem to be disjoint at first glance. While mereotopology qualitatively describes static relations between things, such as x isConnected y (topology) or x isPartOf y (mereology), by first-order logic and Boolean algebra, the phase-field concept describes the geometric shape of things and its dynamic evolution by drawing on a scalar field. The geometric shape of any thing is defined by its boundaries to one or more neighboring things. The notion and description of boundaries thus provides a bridge between mereotopology and the phase-field concept. The present article aims to relate phase-field expressions describing boundaries and especially triple junctions to their Boolean counterparts in mereotopology and contact algebra. An introductory overview of mereotopology is followed by an introduction to the phase-field concept, already indicating its first relations to mereotopology. Mereotopological axioms and definitions are then discussed in detail from a phase-field perspective. A dedicated section introduces and discusses further notions of the isConnected relation emerging from the phase-field perspective, such as isSpatiallyConnected, isTemporallyConnected, isPhysicallyConnected, isPathConnected, and wasConnected. Such relations introduce dynamics and thus physics into mereotopology, as transitions from isDisconnected to isPartOf can be described.
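
A minimal sketch of the boundary-based reading of isConnected, under illustrative assumptions (the tanh profile and sampling grid are choices of this sketch, not the paper's): thing a is described by a scalar field phi_a in [0, 1], thing b by phi_b = 1 − phi_a, and the two share a boundary wherever the product phi_a·phi_b is positive.

```python
import math

# Two phase fields on a 1D domain: phi_a for thing a (occupying x < 0),
# phi_b = 1 - phi_a for thing b (occupying x > 0). Their product is an
# interface indicator, peaking where the two things meet.
def phi_a(x, width=1.0):
    return 0.5 * (1.0 - math.tanh(x / width))

def phi_b(x):
    return 1.0 - phi_a(x)

xs = [i / 10.0 for i in range(-50, 51)]
peak = max(phi_a(x) * phi_b(x) for x in xs)
print(round(peak, 2))  # 0.25, attained at the shared boundary x = 0
```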

AppliedMath doi: 10.3390/appliedmath2010003

Authors: Jaya P. N. Bishwal

For stationary ergodic diffusions satisfying nonlinear homogeneous Itô stochastic differential equations, this paper obtains Berry–Esseen bounds on the rates of convergence to normality of the distributions of quasi-maximum-likelihood estimators based on stochastic Taylor approximation, under some regularity conditions, when the diffusion is observed at equally spaced, dense time points over a long time interval (the high-frequency regime). It shows that the higher-order stochastic Taylor approximation-based estimators perform better than the basic Euler approximation in the sense of having smaller asymptotic variance.
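
The baseline Euler (Euler–Maruyama) discretization that the higher-order schemes are compared against can be sketched as below, here on an Ornstein–Uhlenbeck equation dX = −θX dt + σ dW chosen purely for illustration; the estimators themselves are not reproduced.

```python
import random

# Euler-Maruyama discretization of dX = -theta*X dt + sigma dW on [0, T]
# with n equally spaced steps; seeded for reproducibility.
def euler_path(theta, sigma, x0, T, n, seed=0):
    rng = random.Random(seed)
    h = T / n
    x, path = x0, [x0]
    for _ in range(n):
        x += -theta * x * h + sigma * (h ** 0.5) * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

path = euler_path(theta=1.0, sigma=0.2, x0=1.0, T=5.0, n=500)
print(len(path), round(path[-1], 3))
```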

AppliedMath doi: 10.3390/appliedmath2010002

Authors: Theodore P. Hill

This article introduces a new stochastic non-isotropic frictional abrasion model, in the form of a single short partial integro-differential equation, to show how frictional abrasion alone of a stone on a planar beach might lead to the oval shapes observed empirically. The underlying idea in this theory is the intuitive observation that the rate of ablation at a point on the surface of the stone is proportional to the product of the curvature of the stone at that point and the likelihood that the stone is in contact with the beach at that point. Specifically, key roles in this new model are played by both the random wave process and the global (non-local) shape of the stone, i.e., its shape away from the point of contact with the beach. The underlying physical mechanism for this process is the conversion of energy from the wave process into the potential energy of the stone. No closed-form or even asymptotic solution is known for the basic equation, which is both non-linear and non-local. On the other hand, preliminary numerical experiments are presented in both the deterministic continuous-time setting using standard curve-shortening algorithms and a stochastic discrete-time polyhedral-slicing setting using Monte Carlo simulation.
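
A generic curve-shortening step in the spirit of the deterministic experiments mentioned above (this is a standard discrete smoothing flow, not the paper's non-local stochastic model): each vertex of a closed polygon moves toward the midpoint of its neighbours, which rounds off high-curvature corners and shrinks the perimeter.

```python
import math

# One discrete curve-shortening step: move each vertex a fraction `step`
# of the way toward the midpoint of its two neighbours.
def shorten(polygon, step=0.25):
    n = len(polygon)
    out = []
    for i, (x, y) in enumerate(polygon):
        (xp, yp), (xn, yn) = polygon[i - 1], polygon[(i + 1) % n]
        mx, my = (xp + xn) / 2.0, (yp + yn) / 2.0
        out.append((x + step * (mx - x), y + step * (my - y)))
    return out

def perimeter(polygon):
    return sum(math.dist(polygon[i - 1], p) for i, p in enumerate(polygon))

square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
p = square
for _ in range(10):
    p = shorten(p)
print(perimeter(p) < perimeter(square))  # True: the curve keeps shortening
```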

AppliedMath doi: 10.3390/appliedmath2010001

Authors: Athanasios Fragkou Avraam Charakopoulos Theodoros Karakasidis Antonios Liakopoulos

Understanding the underlying processes and extracting detailed characteristics of rivers is critical, yet such analysis has not yet been fully developed. The purpose of this study was to examine the performance of non-linear time-series methods on environmental data. Specifically, we analyzed water-level measurements, extracted from sensors located at specified stations along the Nestos River (Greece), with Recurrence Plot (RP) and Recurrence Quantification Analysis (RQA) methods. A more detailed inspection with the sliding-windows (epochs) method was applied to the Recurrence Rate, Average Diagonal Line, and Trapping Time parameters, with results showing phase transitions that provide useful information about the dynamics of the system. The suggested method seems promising for detecting the dynamical transitions that characterize distinct time windows of the time series and reveals information about changes in state within the whole time series. The results will be useful for planning producers' energy investments and will also be helpful for dam management assessment and government energy policy.
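
The core RP/RQA construction is compact enough to sketch: two samples of the series are "recurrent" when they lie within a threshold eps of each other, and the Recurrence Rate is the fraction of recurrent pairs. The series and threshold below are made up for illustration; real analyses also embed the series in a delay space first.

```python
# Minimal recurrence-plot sketch on a scalar series (no delay embedding).
def recurrence_matrix(series, eps):
    """R[i][j] = 1 when |series[i] - series[j]| < eps, else 0."""
    n = len(series)
    return [[1 if abs(series[i] - series[j]) < eps else 0 for j in range(n)]
            for i in range(n)]

def recurrence_rate(R):
    """Fraction of recurrent pairs (density of ones in R)."""
    n = len(R)
    return sum(map(sum, R)) / (n * n)

series = [0.0, 0.1, 0.9, 1.0, 0.05]   # illustrative water-level-like values
R = recurrence_matrix(series, eps=0.2)
print(recurrence_rate(R))  # 0.52
```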

AppliedMath doi: 10.3390/appliedmath1010005

Authors: Vasilios N. Katsikis Spyridon D. Mourtas

In finance, the most efficient portfolio is the tangency portfolio, which is formed by the intersection point of the efficient frontier and the capital market line. This paper defines and explores the time-varying tangency portfolio under nonlinear constraints (TV-TPNC) problem as a nonlinear programming (NLP) problem. Because meta-heuristics are commonly used to solve NLP problems, a semi-integer beetle antennae search (SIBAS) algorithm is proposed for solving cardinality-constrained NLP problems and, hence, the TV-TPNC problem. The main results of numerical applications on real-world datasets demonstrate that our method is a strong substitute for other evolutionary methods.

AppliedMath doi: 10.3390/appliedmath1010004

Authors: Anik Gomes Jahangir Alam Ghulam Murtaza Tahmina Sultana Efstratios E. Tzirtzilakis Mohammad Ferdows

The aim of the present study is to analyze the effects of an aligned magnetic field and radiation on biomagnetic fluid flow and heat transfer over an unsteady stretching sheet with various slip conditions. The magnetic field is assumed to be sufficiently strong to saturate the ferrofluid, and the variation of magnetization is approximated by a linear function of the temperature difference. The governing boundary-layer equations with boundary conditions are simplified by suitable transformations. A numerical solution is obtained using the bvp4c function in MATLAB. Numerical results are derived for the velocity, the temperature, the skin friction coefficient, and the rate of heat transfer. The evaluated results are compared with analytical results documented in the scientific literature. The present investigation shows that the fluid velocity decreases with increasing values of the radiation parameter, magnetic parameter, and ferromagnetic interaction parameter, whereas it increases as the Prandtl number, Grashof number, permeability parameter, and thermal slip parameter are increased. In this investigation, the suction/injection parameter has a pronounced effect on the skin friction coefficient and the rate of heat transfer.

AppliedMath doi: 10.3390/appliedmath1010003

Authors: Hossein Hassani Mahdi Kalantari Christina Beneki

Singular spectrum analysis (SSA) is a popular filtering and forecasting method that is used in a wide range of fields such as time series analysis and signal processing. A commonly used approach to identifying the meaningful components of a time series in the grouping step of SSA is to use the visual information of eigentriples. A supplementary approach is to employ an algorithm that performs clustering based on the dissimilarity matrix defined by the weighted correlation between the components of a time series. A search of the SSA literature revealed that no investigation has compared the various clustering methods. The aim of this paper was to compare the effectiveness of different hierarchical clustering linkages in identifying the appropriate groups in the grouping step of SSA. The comparison was performed using the corrected Rand (CR) index as a comparison criterion on various simulated series. It was also demonstrated, via two real-world time series, how one can proceed, step by step, to conduct grouping in SSA using a hierarchical clustering method. This paper is supplemented with accompanying R codes.
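
The grouping idea can be sketched with a tiny agglomerative clustering on a dissimilarity matrix. The matrix below is made up; in SSA it would typically be 1 − |w-correlation| between elementary reconstructed components, and single linkage is only one of the linkages the paper compares.

```python
# Agglomerative single-linkage clustering on a symmetric dissimilarity
# matrix D, merging the two closest clusters until n_clusters remain.
def single_linkage(D, n_clusters):
    clusters = [[i] for i in range(len(D))]
    while len(clusters) > n_clusters:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = min(D[i][j] for i in clusters[a] for j in clusters[b])
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        clusters[a] += clusters.pop(b)
    return [sorted(c) for c in clusters]

# Hypothetical matrix: components 0, 1 are close (e.g. a trend pair),
# components 2, 3 are close (e.g. a seasonal pair), the pairs are far apart.
D = [[0.0, 0.1, 0.9, 0.8],
     [0.1, 0.0, 0.85, 0.9],
     [0.9, 0.85, 0.0, 0.2],
     [0.8, 0.9, 0.2, 0.0]]
print(single_linkage(D, 2))  # [[0, 1], [2, 3]]
```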

AppliedMath doi: 10.3390/appliedmath1010002

Authors: Christophe Chesneau

Copulas are useful functions for modeling multivariate distributions through their univariate marginal distributions and dependence structures. They have a wide range of applications in all fields of science that deal with multivariate data. While there is a plethora of copulas, those based on trigonometric functions, especially in dimensions greater than two, have received much less attention. They are, however, of interest because of the oscillation and periodicity properties of the trigonometric functions, which can appear in certain models of correlation of natural phenomena. In order to fill this gap, this paper introduces and investigates two new types of “multivariate trigonometric copulas”. Their main theoretical properties are studied, and some perspectives for applications are sketched for future work. In particular, we show that the proposed copulas are symmetric, not associative, with no orthant dependence, and with copula densities that have wide oscillations, which remains an uncommon property in the field. The expressions of their multivariate Spearman’s rho are also determined. Furthermore, the first type of the proposed copulas has the interesting feature of having a multivariate Spearman’s rho equal to 0 for all dimensions. Some graphic evidence supports the findings. Some mathematical formulas involving the product of n trigonometric functions may be of independent interest.
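
For intuition, here is a simple bivariate trigonometric copula (a sine perturbation of the independence copula; it illustrates the flavor of such constructions and is not necessarily one of the paper's): C(u, v) = uv + λ sin(πu) sin(πv), a valid copula for |λ| ≤ 1/π² since its density 1 + λπ² cos(πu) cos(πv) then stays non-negative.

```python
import math

LAM = 1.0 / math.pi ** 2   # largest lambda keeping the density non-negative

def C(u, v, lam=LAM):
    """Trigonometric copula C(u, v) = u*v + lam*sin(pi*u)*sin(pi*v)."""
    return u * v + lam * math.sin(math.pi * u) * math.sin(math.pi * v)

def density(u, v, lam=LAM):
    """Copula density: mixed second partial derivative of C."""
    return 1.0 + lam * math.pi ** 2 * math.cos(math.pi * u) * math.cos(math.pi * v)

# Uniform-margin boundary conditions: C(u, 0) = 0 and C(u, 1) = u.
print(round(C(0.3, 0.0), 6), round(C(0.3, 1.0), 6))
# Density non-negative on a grid (up to floating-point rounding).
min_density = min(density(i / 20.0, j / 20.0)
                  for i in range(21) for j in range(21))
print(min_density > -1e-9)  # True
```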

AppliedMath doi: 10.3390/appliedmath1010001

Authors: Takayuki Hibi

Mathematics has been built over thousands of years of history, since the creation of ancient civilizations [...]
