Physical Sciences Forum doi: 10.3390/psf2023008001

Authors: Vishvas Pandey

A thorough understanding of neutrino–nucleus interaction physics is crucial to achieving precision goals in broader neutrino physics programs. The complexity of the nuclei comprising the detectors and the limited understanding of their weak response constitute two of the biggest systematic uncertainties in neutrino experiments—both at intermediate energies affecting short- and long-baseline neutrino programs and at lower energies affecting coherent scattering neutrino programs. While electron and neutrino interactions are different at the primary vertex, many of the relevant underlying physical processes in the nucleus are the same in both cases, and electron scattering data collected with precisely controlled kinematics, large statistics, and high precision allow one to constrain nuclear properties and specific interaction processes. To this end, electron–nucleus scattering experiments provide vital complementary information to test, assess, and validate the different nuclear models and event generators intended for use in neutrino experiments. In fact, for many decades, the study of electron scattering off a nucleus has been used as a tool to probe the properties of that nucleus and its electromagnetic response. While previously existing electron scattering data provide important information, new and proposed measurements are tied closely to what the neutrino program requires in terms of expanded kinematic reach, the addition of relevant nuclei, and information on the final-state hadronic system.

Physical Sciences Forum doi: 10.3390/psf2023006005

Authors: Benhammou Aissa Tedjini Hamza Guettaf Yacine Hartani Mohamed Amine

Energy is a field of great interest for development, especially in the transportation industry. This paper investigates a two-wheel-drive hybrid electric vehicle (HEV) powered by a fuel cell, a battery, DC generators, and supercapacitors. Each energy source is connected to its own controllable converter. The authors compared the Adaptive Neuro-Fuzzy Inference System (ANFIS) energy management strategy with classical energy management strategies. The proposed ANFIS method reduced hydrogen consumption by 8% compared to the classical approach and improved efficiency to over 98%. The primary objective of this work is to demonstrate the impact of artificial intelligence on renewable energy management strategies (EMSs), aiming to improve system performance as much as possible by comparison with classical methods such as the state machine (SM) and PI strategies.

Physical Sciences Forum doi: 10.3390/psf2023007057

Authors: Lorenzo Iorio

In submitting conference proceedings to Physical Sciences Forum, the volume editors of the proceedings certify to the publisher that all papers published in this volume have been subjected to peer review administered by the volume editors [...]

Physical Sciences Forum doi: 10.3390/psf2023006004

Authors: Boukhari Mehdi Daouia Brahmi-Ingrachen Hayet Belkacemi Laurence Muhr

In this investigation, an artificial-neural-network-based mathematical model was developed to predict nickel adsorption data. The initial concentration, adsorbent dosage, and pH of the nickel solution were chosen as input variables, while the removal efficiency was chosen as the output variable. The hyperparameters were optimized to determine the best topology for the model. The study demonstrated that the 3-2-1 ANN architecture was the most suitable. A determination coefficient of 0.98 and a mean squared error of 0.02 indicated the high performance of the developed model, which was successfully applied to isotherm data prediction.
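To make the 3-2-1 topology concrete, here is a minimal forward-pass sketch in Python; the weights are random placeholders rather than the fitted values from the study, and the input normalization is assumed:

```python
import numpy as np

# Minimal sketch of the 3-2-1 feedforward topology described above.
# Weights are random placeholders, not the fitted values from the paper;
# inputs are (initial concentration, adsorbent dosage, pH), normalized to [0, 1],
# and the output is the removal efficiency.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 2))   # input -> hidden layer (3-2)
b1 = np.zeros(2)
W2 = rng.normal(size=(2, 1))   # hidden -> output layer (2-1)
b2 = np.zeros(1)

def predict(x):
    """Forward pass through the 3-2-1 network; x has shape (n_samples, 3)."""
    h = sigmoid(x @ W1 + b1)
    return sigmoid(h @ W2 + b2)

# Example: two normalized (concentration, dosage, pH) triples.
x = np.array([[0.2, 0.5, 0.7], [0.8, 0.1, 0.4]])
y = predict(x)   # predicted removal efficiencies, each in (0, 1)
```

In practice the weights would be trained (e.g., by backpropagation) against the measured adsorption data before the R² and MSE figures quoted above could be reproduced.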

Physical Sciences Forum doi: 10.3390/psf2023006003

Authors: Lotfi Rahal

The development of new tools of functional exploration in medicine has revolutionized the means of diagnosing different pathologies and allowed a clear improvement in patients&rsquo; management [...]

Physical Sciences Forum doi: 10.3390/psf2022005053

Authors: Keiko Uohashi

This study considers dualistic structures of the probability simplex from the information geometry perspective. We investigate a foliation by deformed probability simplexes for the transition of α-parameters, not for a fixed α-parameter. We also describe the properties of extended divergences on the foliation when different α-parameters are defined on each of the various leaves.

Physical Sciences Forum doi: 10.3390/psf2022005043

Authors: Frédéric Barbaresco Ali Mohammad-Djafari Frank Nielsen Martino Trassinelli

The forty-first International Conference on Bayesian and Maximum Entropy Methods in Science and Engineering (41st MaxEnt'22) was held at the Institut Henri Poincaré (IHP), Paris, 18–22 July 2022 (https://maxent22 [...]

Physical Sciences Forum doi: 10.3390/psf2023006002

Authors: Leila Aliouane Sid-Ali Ouadfeul

Geothermal energy is one of the cleanest, most accessible, and cheapest alternative energies in the world [...]

Physical Sciences Forum doi: 10.3390/psf2023006001

Authors: Selma Mediene Assia Rachida Senoudi

This work deals with a numerical study of the different thermal processes in a gold nanoparticle heated by a femtosecond pulsed laser and cooled in different biological tissues, such as healthy human prostate, blood, fat, prostate tumor, skin, and the protein myoglobin. A 40 nm diameter gold nanoparticle is heated using a femtosecond pulsed laser with a duration of 85 fs and a fluence of 1.4 J/m². A two-temperature model is used to describe the dynamics of the energy exchange between the electron gas and the phonon lattice, in addition to Fourier's law and the relationship between the thermal conductivity of the external medium and the temperature. The temperature of the external medium near the nanoparticle surface was computed, and the effect of the laser energy was reported.
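As an illustration of the two-temperature model referenced above, the sketch below Euler-integrates the standard coupled equations C_e dT_e/dt = −G(T_e − T_l) + S(t) and C_l dT_l/dt = G(T_e − T_l); all parameter values (heat capacities, coupling constant, source strength) are illustrative placeholders, not the gold or tissue constants used in the paper:

```python
# Toy Euler integration of the two-temperature model:
#   C_e dT_e/dt = -G (T_e - T_l) + S(t)
#   C_l dT_l/dt =  G (T_e - T_l)
# T_e is the electron-gas temperature, T_l the phonon-lattice temperature.
# All numbers below are placeholders chosen only to make the dynamics visible.

def two_temperature(t_end=5e-12, dt=1e-15, Ce=70.0, Cl=2.5e3, G=2.5e16,
                    pulse=85e-15, S0=1e19):
    Te = Tl = 300.0                      # start at room temperature (K)
    n = int(t_end / dt)
    for i in range(n):
        t = i * dt
        S = S0 if t < pulse else 0.0     # rectangular stand-in for the fs pulse
        dTe = (-G * (Te - Tl) + S) / Ce  # electron gas: heated, then relaxes
        dTl = (G * (Te - Tl)) / Cl       # lattice: heated by the electrons
        Te += dTe * dt
        Tl += dTl * dt
    return Te, Tl

Te, Tl = two_temperature()   # after 5 ps the two subsystems have equilibrated
```

The qualitative behavior (fast electron heating during the 85 fs pulse, then electron-phonon equilibration) is the point of the sketch; the paper additionally couples this to Fourier heat diffusion into the surrounding tissue.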

Physical Sciences Forum doi: 10.3390/psf2022005052

Authors: Łukasz Tychoniec Fabrizia Guglielmetti Philipp Arras Torsten Enßlin Eric Villard

The Atacama Large Millimeter/submillimeter Array (ALMA) is currently revolutionizing observational astrophysics. The aperture synthesis technique provides angular resolution otherwise unachievable with a conventional single-aperture telescope. However, recovering the image from inherently undersampled data is a challenging task. The clean algorithm has proven successful and reliable and is commonly used in imaging interferometric observations. It is not, however, free of limitations. The point-source assumption, central to clean, is not optimal for the extended structures of molecular gas recovered by ALMA. Additionally, the negative fluxes recovered with clean are not physical. This motivates the search for alternatives better suited to specific scientific cases. We present recent developments in imaging ALMA data using Bayesian inference techniques, namely the resolve algorithm. This algorithm, based on information field theory, has already been successfully applied to image Very Large Array data. We compare the ability of both clean and resolve to recover a known sky signal passed through a simulator of ALMA observations, and we investigate the problem with a set of actual ALMA observations.

Physical Sciences Forum doi: 10.3390/psf2022005051

Authors: Aleksandr Petrosyan Will Handley

We present a method for improving both the performance and the accuracy of nested sampling. Building on previous work, we show that posterior repartitioning may be used to reduce the time nested sampling spends compressing from prior to posterior if a suitable "proposal" distribution is supplied. We showcase this on a cosmological example with a Gaussian posterior, and we release the code as an LGPL-licensed, extensible Python package, supernest.

Physical Sciences Forum doi: 10.3390/ECU2023-14059

Authors: Raghav Narasimha Della Vincent Arun Kenath Chandra Sivaram

Black holes are not expected to form in the mass range of 60 M⊙ to 130 M⊙ because of the pair-instability supernova (PISN) mechanism. However, the recent observational evidence of GW190521 is not consistent with the existing theory. Here, we have looked into the effects of dark matter (DM) on the progenitors of PISNe in terms of luminosity, lifetime, and temperature, and we have shown that, in the presence of DM particles, the progenitors can overcome the PISN stage and collapse into a black hole (BH) remnant.

Physical Sciences Forum doi: 10.3390/ECU2023-14121

Authors: Boris E. Meierovich

An analytical spherically symmetric static solution to the set of Einstein and Klein–Gordon equations in a synchronous reference frame is considered. In a synchronous reference frame, a static solution exists in the ultrarelativistic limit p = −ε/3. The pressure p is negative when matter tends to contract. The solution purports to describe a collapsed black hole, and the balance at the boundary with dark matter ensures that the black hole solution is static. Inside the black hole there is a spherical layer between two "gravitational" radii r_g and r_h > r_g, where the solution exists but is not unique. In a synchronous reference frame, det g_ik and g_rr do not change sign. The non-uniqueness of the solutions with boundary conditions at r = r_g and r = r_h makes it possible to find the gravitational field both inside and outside the black hole. The synchronous reference frame also allows one to find the remaining mass of the condensate. In the "λψ⁴" model, the total mass M = 3c²/(2k r_h) is three times the mass seen by a distant observer. This gravitational mass defect is spent on keeping the bosons in the bound ground state and on the balance between the elasticity and the density of the condensate.

Physical Sciences Forum doi: 10.3390/ECU2023-14104

Authors: Mohammed B. Al-Fadhli

Large inconsistencies in the outcomes of precise measurements of the Newtonian gravitational 'constant' have been identified across more than three hundred experiments conducted to date. This paper illustrates the dependency of the Newtonian gravitational parameter on the curvature of the background and the associated field strength of vacuum energy. Additionally, the derived interaction field equations show that boundary interactions and spin-spin correlations of vacuum and conventional energy densities contribute to the emergence of mass. Experimental conditions are recommended to achieve consistent outcomes in precision measurements of the parameter, which can directly falsify or confirm the presented field equations.

Physical Sciences Forum doi: 10.3390/ECU2023-14102

Authors: Mohammed B. Al-Fadhli

Advances in cosmology and astronomical observations have brought to light significant tensions and uncertainties within the current model of cosmology, which assumes a spatially flat Universe and is known as the ΛCDM model. Moreover, the Planck Legacy 2018 release preferred a positively curved early Universe with a confidence level of more than 99%. This study reports a quantum mechanism that could potentially replace the concept of dark matter/energy by taking into account the primordial curvature while generating the present-day spatial flatness. The approach incorporates the primordial curvature as the background curvature to extend the field equations into brane-world gravity. It utilizes a new wavefunction of the Universe that propagates in the bulk with respect to the scale factor and curvature radius of the early Universe upon the emission of the cosmic microwave background. The resulting wavefunction yields both positive and negative solutions, revealing the presence of a pair of entangled wavefunctions as a manifestation of the creation of matter and antimatter sides of the Universe. The wavefunction shows a nascent hyperbolic expansion away from the early energy in opposite directions, followed by a first, decelerating expansion phase during the first ~10 Gyr and a subsequent accelerating expansion phase in the reverse directions. During the second phase, both sides of the Universe are free-falling towards each other under gravitational acceleration. The simulation of the predicted background-curvature evolution shows that the early curved background caused galaxies to experience external fields, resulting in the fast orbital speeds of outer stars. Finally, the wavefunction predicts that the Universe will eventually undergo a rapid contraction phase resulting in a Big Crunch, which reveals a cyclic Universe.

Physical Sciences Forum doi: 10.3390/ECU2023-14100

Authors: Pantelis S. Apostolopoulos Christos Tsipogiannis

In a recent paper, a new conformally flat metric was introduced that describes an expanding scalar field in a spherically symmetric geometry. The spacetime can be interpreted as a Schwarzschild-like model with an apparent horizon surrounding the curvature singularity. For this metric, we present the complete conformal Lie algebra, consisting of a six-dimensional subalgebra of isometries (Killing vector fields, or KVFs) and nine proper conformal vector fields (CVFs). An interesting aspect of our findings is that there exists a gradient (proper) conformal symmetry (i.e., its bivector F_ab vanishes), which confirms the importance of gradient symmetries in constructing viable cosmological models. In addition, the nine-dimensional conformal algebra implies the existence of constants of motion along null geodesics that allow us to determine the complete solution of the null geodesic equations.

Physical Sciences Forum doi: 10.3390/ECU2023-14101

Authors: Meysam Motaharfar Parampreet Singh

It has recently been shown that the tunneling wavefunction proposal is consistent with loop quantum geometry corrections, including both holonomy and inverse-scale-factor corrections, in the gravitational part of a spatially closed isotropic model with a positive cosmological constant. However, in the presence of an inflationary potential, the initial singularity is kinetic-dominated, and the effective minisuperspace potential again diverges at zero scale factor. As the wavefunction in loop quantum cosmology cannot increase towards zero scale factor, the tunneling wavefunction seems incompatible. We show that consistently including inverse-scale-factor modifications in the scalar field Hamiltonian changes the effective potential into a barrier potential, allowing the tunneling proposal. We also discuss the potential quantum instability of the cyclic universe resulting from tunneling.

Physical Sciences Forum doi: 10.3390/ECU2023-14097

Authors: Avadhut V. Purohit

This paper shows that the field defined by the Wheeler–DeWitt equation for pure gravity is neither a standard gravitational field nor the field representing a particular universe. The theory offers a unified description of geometry and matter, with geometry being fundamental. The quantum theory exhibits gravitational decoherence when the signature of R(3) changes, and it resolves singularities dynamically. Application to the FLRW κ=0 case shows the creation of local geometries during quantum evolution. In the case of the Schwarzschild geometry, the 3-metric is modified near the classical singularity.

Physical Sciences Forum doi: 10.3390/ECU2023-14072

Authors: Michael Romano

Galactic feedback (i.e., outflows) plays a fundamental role in regulating galaxy formation and evolution. We investigate the physical properties of galactic outflows in a sample of 29 local low-metallicity dwarf galaxies drawn from the Dwarf Galaxy Survey. We make use of Herschel/PACS archival data to detect outflows in the broad wings of the observed [CII] 158 μm line profiles. We detect outflowing gas in one-third of the sample, as well as in the average galaxy population through line stacking. We find typical mass-loading factors (i.e., outflow efficiencies) of the order of unity. Outflow velocities are larger than the velocities required for gas to escape the gravitational potential of our targets, suggesting that a significant amount of gas and dust is carried out of their halos. Our results will be used as input for chemical models, posing new constraints on the processes of dust production and destruction in the interstellar medium of galaxies.

Physical Sciences Forum doi: 10.3390/ECU2023-14067

Authors: Michael Romano

Galaxies are thought to grow through star formation or by interacting with each other. To understand which process dominates, we investigated the contribution of major mergers to galaxy mass assembly across cosmic time. We made use of recent observations from the ALPINE survey to analyze the morphological and kinematic information provided by the [CII] 158 μm line observed in z ∼ 5 star-forming galaxies. We found that 40% of the galaxies in that epoch were undergoing mergers. By combining our results with studies at lower redshift, we computed the cosmic evolution of the merger fraction, estimating that major mergers could contribute up to 30% of the cosmic star-formation rate density at z > 4.

Physical Sciences Forum doi: 10.3390/ECU2023-14063

Authors: Mohammed B. Al-Fadhli

The recent Planck Legacy 2018 release verified the presence of an enhanced lensing amplitude in the power spectra of the cosmic microwave background with a confidence level of over 99%, which implies that the early Universe had a positive curvature. In this study, the curvature of the early Universe is regarded as the curvature of a 4D conformal bulk, while celestial objects that induce a localized curvature in the bulk are considered 4D relativistic cloud-worlds. Likewise, quantum fields are considered 4D relativistic quantum clouds that are affected by the curvature of the bulk as a manifestation of gravity. This approach could eliminate the singularities and satisfy the conditions of a conformally invariant theory.

Physical Sciences Forum doi: 10.3390/ECU2023-14061

Authors: Abdellah Touati Slimane Zaim

In this paper, we investigate the four classical tests of general relativity in the non-commutative (NC) gauge theory of gravity. Using the Seiberg–Witten (SW) map and the star product, we calculate the deformed metric components ĝ_μν(r, Θ) of the Schwarzschild black hole (SBH). This deformed metric enables us to calculate the gravitational periastron advance of Mercury, the redshift, the deflection of light, and time delays in the NC spacetime. Our results for the NC predictions of the gravitational deflection of light and of time delays show new behavior compared to the classical one. As an application, we use a typical primordial black hole to estimate the NC parameter Θ; our results give Θ_phy ≈ 10⁻³⁴ m for the gravitational redshift, the deflection of light, and time delays at the final stage of inflation, and Θ_phy ≈ 10⁻³¹ m for the gravitational periastron advance of some planets of our solar system.

Physical Sciences Forum doi: 10.3390/ECU2023-14065

Authors: U. V. Satya Seshavatharam S. Lakshminarayana

Based on light-speed expansion, a modified redshift formula, a scaled Hawking black hole temperature formula, the super gravity of galactic baryonic matter, and the baby Planck ball, our recent publications have established a novel model of quantum cosmology. In this contribution, we appeal for a review of the basics of Lambda cosmology in the context of cosmic quantum spin. We would like to emphasize that spin is a basic property of quantum mechanics, and anyone interested in developing quantum models of cosmology must consider cosmic rotation. It may also be noted that, without a radial inflow of matter in all directions towards one specific point, one cannot expect a big crunch, and without a big crunch one cannot expect a big bang. Indeed, if there was a "big bang" in the past, then, with reference to its formation as predicted by the General Theory of Relativity (GTR) and to the cosmic expansion that would have taken place simultaneously in all directions at a "naturally selected rate" about that point, the "point" of the big bang can be considered the characteristic reference point of cosmic expansion in all directions. Thinking in this way, either the point of the big bang or the baby Planck ball can be considered a possible centre of cosmic evolution.

Physical Sciences Forum doi: 10.3390/ECU2023-14064

Authors: Pantelis S. Apostolopoulos

In this work, we address, in a general way, the problem of the effect of bulk matter on brane cosmological evolution. We assume that the spatial part of the brane metric is not maximally symmetric and is, therefore, spatially inhomogeneous. However, we retain the conformal flatness property of the standard cosmological model (FRW), i.e., the Weyl tensor of the induced 4D geometry is zero. We refer to this as a Spatially Inhomogeneous Irrotational (SII) brane. It is shown that the model can be regarded as the 5D generalization of the SII spacetimes found recently.

Physical Sciences Forum doi: 10.3390/ECU2023-14062

Authors: Siwaphiwe Jokweni Vijay Singh Aroonkumar Beesham Binaya Kumar Bishi

A locally rotationally symmetric Bianchi-I model is explored both in general relativity and in f(R,T) gravity, where R is the Ricci scalar and T is the trace of the energy-momentum tensor. Solutions have been found by means of a special Hubble parameter, yielding a hyperbolic hybrid scale factor, and some geometrical parameters have been studied. A comparison is made between the solutions in general relativity and in f(R,T) gravity; in both theories, the models exhibit rich behaviour, evolving from stiff matter to quintessence or phantom and later mimicking the cosmological constant, depending on some parameters.

Physical Sciences Forum doi: 10.3390/ECU2023-14066

Authors: Abdel Nasser Tawfik

Whether an algebraic, geometric, or phenomenological prescription is applied, the first fundamental form is unambiguously related to the modeling of curved spacetime. Accordingly, we assume that a quantization of the first fundamental form can be proposed. For a precise measurement of the first fundamental form ds² = g_μν dx^μ dx^ν, the author derived a quantum-induced revision of the fundamental tensor. To this end, the four-dimensional Riemann manifold is extended to an eight-dimensional Finsler manifold, in which the quadratic restriction on the length measure is relaxed; in the relativistic regime especially, the minimum measurable length can be imposed ad hoc on the Finsler structure. The present script introduces an approach to quantize the fundamental tensor and the first fundamental form. Based on gravitized quantum mechanics, the resulting relativistic generalized uncertainty principle (RGUP) is directly imposed on the Finsler structure F(x̂_0^μ, p̂_0^ν), which is homogeneous of degree one in p̂_0^μ. The momentum of a test particle enters through m̄ = m/m_p, where m_p is the Planck mass. This unambiguously results in the quantized first fundamental form ds̃² = [1 + (1 + 2β p̂_0ρ p̂_0^ρ) m̄² (|ẍ|/A)²] g_μν dx̂^μ dx̂^ν, where ẍ is the proper spacelike four-acceleration, A is the maximal proper acceleration, and β is the RGUP parameter. We conclude that an additional source of curvature, associated with the mass m̄ of a test particle accelerated at |ẍ|, apparently emerges. Thereby, quantizations of the fundamental tensor and the first fundamental form are feasible.
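For readability, the quantized line element quoted above can be set in display form (symbols as defined in the abstract; this is a transcription, not a re-derivation):

```latex
\widetilde{ds}^{\,2} \;=\;
\left[\, 1 + \left(1 + 2\beta\, \hat{p}_{0\rho}\, \hat{p}_{0}^{\rho}\right)
\bar{m}^{2} \left( \frac{|\ddot{x}|}{\mathcal{A}} \right)^{2} \right]
g_{\mu\nu}\, d\hat{x}^{\mu}\, d\hat{x}^{\nu},
\qquad \bar{m} = \frac{m}{m_{p}} ,
```

where the bracketed prefactor reduces to unity (recovering the classical ds²) when the acceleration vanishes or the test mass is negligible compared with the Planck mass.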

Physical Sciences Forum doi: 10.3390/ECU2023-14060

Authors: Olga Avsajanishvili Lado Samushia

We studied the following scalar field φCDM models: ten quintessence models and seven phantom models. We reconstructed these models using the phenomenological method developed by our group. For each potential, the following ranges were found: (i) the model parameters; (ii) the EoS parameters; and (iii) the initial conditions for the differential equations describing the dynamics of the universe. Using MCMC analysis, we obtained constraints on the scalar field models by comparing observations of the expansion rate of the universe, the angular diameter distance, and the growth rate function with the corresponding data generated for the fiducial ΛCDM model. We applied Bayesian statistical criteria to compare the scalar field models; to this end, we calculated the Bayes factor, as well as the AIC and BIC information criteria. The results of this analysis show that we could not uniquely identify a preferable scalar field φCDM model over the fiducial ΛCDM model based on the predicted DESI data, for which the ΛCDM model is the assumed true dark energy model. We investigated the scalar field φCDM models in the w0–wa phase space of the CPL-ΛCDM contours. We identified subclasses of quintessence and phantom scalar field models that, in the present epoch: (i) can be distinguished from the ΛCDM model; (ii) cannot be distinguished from the ΛCDM model; and (iii) may or may not be distinguishable from the ΛCDM model. We found that all the studied models can be divided into two classes: models that have attractor solutions and models whose evolution depends on the initial conditions.
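The model-comparison criteria named above have standard textbook definitions, sketched below; the likelihood and evidence values in the example are made-up placeholders, not results from the paper:

```python
import math

# Standard definitions of the information criteria mentioned above.
# log_like is the maximum log-likelihood, k the number of free parameters,
# n the number of data points; the numbers in the example are placeholders.

def aic(log_like, k):
    """Akaike information criterion: AIC = 2k - 2 ln L_max."""
    return 2 * k - 2 * log_like

def bic(log_like, k, n):
    """Bayesian information criterion: BIC = k ln n - 2 ln L_max."""
    return k * math.log(n) - 2 * log_like

def bayes_factor(ln_evidence_a, ln_evidence_b):
    """Bayes factor B_ab = Z_a / Z_b, computed from log-evidences."""
    return math.exp(ln_evidence_a - ln_evidence_b)

# Toy comparison: model A (2 parameters) on n = 100 points.
print(aic(-52.0, 2), bic(-52.0, 2, 100))   # 108.0, ~113.21
```

Lower AIC/BIC values favor a model; a Bayes factor near unity (equal evidences) means the data cannot discriminate between the two models, which mirrors the paper's conclusion for many of the φCDM potentials.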

Physical Sciences Forum doi: 10.3390/ECU2023-14035

Authors: Manabendra Sharma Shankar Dayal Pathak Shiyuan Li

We investigate the background dynamics of a class of models with a noncanonical scalar field and matter in both closed and open Friedmann–Lemaître–Robertson–Walker (FLRW) spacetimes. A detailed dynamical-system analysis is carried out in a bouncing scenario. Cosmological solutions satisfying the stability and bouncing conditions are obtained using the tools of dynamical systems.

Physical Sciences Forum doi: 10.3390/ECU2023-14052

Authors: Delaram Mirfendereski

Superconformal mechanics describes superparticle dynamics in the near-horizon geometries of supersymmetric black holes. We systematically study the minimal compatible set of constraints required for a gauged superconformal symmetry. Our study uncovers classes of sigma models that are only scale invariant in their ungauged form and become fully conformally invariant only after gauging.

Physical Sciences Forum doi: 10.3390/ECU2023-14051

Authors: Vlasios Petousis Martin Veselský Jozef Leja Ch. C. Moustakidis G. A. Souliotis A. Bonasera Laura Navarro

Neutron stars are akin to nuclear physics laboratories, providing a unique opportunity to apply and search for new physics. In that spirit, we explored novel concepts of nuclear physics in a neutron star environment. First, we investigated the reported 17 MeV boson, which has been proposed as an explanation of the 8Be, 4He, and 12C anomalies, in the context of its possible influence on neutron star structure, defining a universal equation of state. Next, we investigated the synthesis of hyper-heavy elements under conditions simulating the neutron star environment.

Physical Sciences Forum doi: 10.3390/ECU2023-14054

Authors: Avraham Nofech

Two equations whose variables take values in the Pauli algebra of complex quaternions are shown to be equivalent to the standard Dirac equation and its Hermitian conjugate taken together. They are transformed into one another by an outer automorphism of the Pauli algebra. Given a solution to the Dirac equation, a new solution is obtained by multiplying it on the right by one of the 16 matrices of the Pauli group. This defines a homomorphism from the Pauli group into the group of discrete symmetries, whose kernel is a cyclic group of order four. The group of discrete symmetries is shown to be the Klein four-group consisting of four elements: the identity Id; the charge conjugation symmetry C; the mass inversion symmetry M; and their composition in either order, CM = MC. The mass inversion symmetry inverts the sign of the mass, leaving the electric charge unchanged. The outer "bar-star" automorphism is identified with the parity operation, resulting in a proof that CPT = M or, equivalently, CPTM = Identity.

Physical Sciences Forum doi: 10.3390/ECU2023-14057

Authors: Binaya Kumar Bishi Pratik Vijay Lepse Aroonkumar Beesham

We investigated the Friedmann–Lemaître–Robertson–Walker (FLRW) cosmological models within the framework of Rastall gravity incorporating particle creation. The modified field equations for Rastall gravity are derived, and exact solutions are obtained for various types of scale factors. The qualitative behaviour of our solutions depends on the Rastall coupling parameter ψ = kλ. Following the literature, we have restricted the Rastall coupling parameter ψ (k = 1) to the range −0.0001 < ψ < 0.0007 at 68% CL from CMB+BAO data. Furthermore, we have discussed the distinct physical behaviour of the derived models in detail.

Physical Sciences Forum doi: 10.3390/ECU2023-14050

Authors: Pravin Kumar Dahal

The geometric optics approximation adequately describes effects in the near-Earth environment, and Faraday rotation is purely a reference-frame effect in this limit. A simple encoding procedure could mitigate the Faraday phase error. However, the framework of geometric optics is not sufficient to describe the propagation of waves of large but finite frequency. We therefore outline a technique to solve the equations for the propagation of an electromagnetic wave up to subleading order in the geometric optics expansion in curved spacetimes. For this, we first construct a set of parallel-propagated null tetrads in curved spacetimes and then use this tetrad to solve the modified trajectory equation. A wavelength-dependent deviation of the electromagnetic waves is observed, which gives a mathematical description of the gravitational spin Hall effect.

Physical Sciences Forum doi: 10.3390/ECU2023-14048

Authors: Mahmoud AlHallak

Single-field inflationary models are investigated within Palatini quadratic gravity, represented by R + αR², along with a non-minimal coupling of the form f(φ)R between the inflaton field φ and gravity. The treatment is performed in the Einstein frame, where minimal coupling to gravity is recovered through a conformal transformation. We consider various limits of the model with different inflationary scenarios: canonical slow-roll inflation in the limit αφ̇² ≪ (1 + f(φ)), constant-roll k-inflation for α ≪ 1, and slow-roll k-inflation for α ≫ 1. Cosine and exponential potentials are examined in the limits mentioned above, with different well-motivated non-minimal couplings to gravity. We compare the theoretical results, exemplified by the tensor-to-scalar ratio r and the spectral index ns, with the recent observational results of Planck 2018 and BICEP/Keck. Furthermore, we include the results of a new study forecasting the precision with which ns and r can be constrained by currently envisaged CMB observations (Simons Observatory, CMB-S4, and LiteBIRD).

Physical Sciences Forum doi: 10.3390/ECU2023-14056

Authors: Francesco Nozzoli Cinzia Cernetti

Nuclei that are unstable with respect to double beta decay are potentially interesting for a novel dark matter (DM) direct-detection approach. In particular, a Majorana DM fermion inelastically scattering on a double-beta-unstable nucleus could stimulate its decay. Thanks to the exothermic nature of the stimulated double beta decay, this detection approach would also allow for the investigation of light DM fermions, a class of DM candidates that evades the detection capability of traditional elastic-scattering experiments. The upper limits on the nucleus scattering cross sections and the expected signal distributions for different DM masses are shown and compared with the existing data for the case of the 76Ge nucleus.

Physical Sciences Forum doi: 10.3390/ECU2023-14053

Authors: Siamak Tafazoli

This paper presents a theoretical calculation of the vacuum energy density by summing the contributions of all quantum fields&rsquo; vacuum states, which indicates that a bosonic contribution seems to be missing if the result is to match the predictions of current cosmological models and all observational data to date. The basis for this calculation is a new Zeta function regularization method used to tame the infinities present in the improper integrals of power functions. The paper also presents a few other contributions in the area of vacuum energy.

Physical Sciences Forum doi: 10.3390/ECU2023-14058

Authors: Vitalii Vertogradov Maxim Misyura

In this paper, we use the gravitational decoupling method to obtain a hairy regular black hole corresponding to the Hayward model. We modify the hairy Schwarzschild solution to obtain a regular Kretschmann scalar. The energy&ndash;momentum tensor of the new model is analyzed, and we show that there is an energy exchange between its parts.

Physical Sciences Forum doi: 10.3390/ECU2023-14049

Authors: Bhupendra Kumar Shukla Rishi Kumar Tiwari Aroonkumar Beesham

In this paper, we have studied an anisotropic Bianchi-I cosmological model in f(R,T) gravity. To obtain exact solutions of the field equations, we have used the condition that the shear-to-expansion ratio &sigma;/&theta; is a function of the scale factor (IJTP, 54, 2740&ndash;2757, 2015). Our model possesses an initial singularity. It initially exhibits decelerating expansion and transits to accelerating expansion at late times. We have also discussed the physical and geometrical properties of the model.

Physical Sciences Forum doi: 10.3390/ECU2023-14055

Authors: Riccardo Nicolaidis Francesco Nozzoli Giancarlo Pepponi Pierluigi Bellutti Evgeny Demenev Francesco Maria Follega Roberto Iuppa Veronica Vilona

An accurate flux measurement of low-energy charged particles trapped in the magnetosphere is necessary for space weather characterization and to study the coupling between the lithosphere and magnetosphere, which allows for the investigation of the correlations between seismic events and particle precipitation from Van Allen belts. In this work, the project of a CubeSat space spectrometer, the Low-Energy Module (LEM), is shown. The detector will be able to perform an event-based measurement of the energy, arrival direction, and composition of low-energy charged particles down to 0.1 MeV. Moreover, thanks to a CdZnTe mini-calorimeter, the LEM spectrometer also allows for photon detection in the sub-MeV range, joining the quest for the investigation of the nature of Gamma-ray bursts. The particle identification of the LEM relies on the &Delta;E&minus;E technique performed by thin silicon detectors. This multipurpose spectrometer will fit within a 10 &times; 10 &times; 10 cm3 CubeSat frame, and it will be constructed as a joint project between the University of Trento, FBK, and INFN-TIFPA. To fulfil the size and mass requirements, an innovative approach, based on active particle collimation, was designed for the LEM; this avoids the heavy/bulky passive collimators of previous space detectors. In this paper, we will present the LEM geometry, its detection concept, and the results from the developed GEANT4 simulation.
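
The &Delta;E&minus;E identification technique mentioned above can be illustrated with a toy calculation (our sketch with an arbitrary normalization constant K; not the LEM reconstruction code): in the non-relativistic Bethe approximation, the energy lost in a thin layer scales as &Delta;E &#8733; mz&sup2;/E, so the product &Delta;E&middot;E clusters events by species.

```python
# Toy illustration of the Delta-E/E particle-identification principle
# (a sketch with an arbitrary normalization K; not the LEM's actual code).
# In the non-relativistic Bethe approximation, a thin detector removes
#   dE ~ K * m * z**2 / E,
# so the product dE * E_residual groups events by species (mass m, charge z).

K = 1.0  # arbitrary normalization (assumption of this sketch)

def thin_layer_loss(m, z, e_total):
    """Energy deposited in the thin Delta-E silicon layer (Bethe-like scaling)."""
    return K * m * z**2 / e_total

def pid_product(m, z, e_total):
    """Delta-E times residual calorimeter energy: ~K*m*z^2 when E >> Delta-E."""
    de = thin_layer_loss(m, z, e_total)
    return de * (e_total - de)

# protons (m=1, z=1) vs alpha particles (m=4, z=2) at the same total energy:
# alphas cluster near 16*K and protons near 1*K -> well-separated bands
for e in (10.0, 20.0, 40.0):
    p_band, a_band = pid_product(1, 1, e), pid_product(4, 2, e)
```

In a real detector the bands are curved and smeared by straggling, but the same product (or a lookup in the &Delta;E&minus;E plane) separates the species.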

Physical Sciences Forum doi: 10.3390/ECU2023-14046

Authors: Majd Shalak Jean-Michel Alimi

In this paper, we study the dynamical and statistical properties of the cosmic web and investigate their ability to infer the corresponding cosmological model. Our definition of the cosmic web is based on the local dimensionality of the gravitational collapse that classifies the cosmic web into four categories: voids, walls, filaments, and nodes. Our results show that each category has its specific non-Gaussian evolution over time and that these non-Gaussianities depend on the cosmological parameters. Nonetheless, the non-Gaussianities in each category exist even at early epochs when the matter field has a Gaussian distribution. Additionally, by using deep learning techniques, we show that leveraging the cosmic web information engenders an improved inference of cosmological parameters, when compared to merely using the matter field.

Physical Sciences Forum doi: 10.3390/ECU2023-14045

Authors: Esma Zouaoui Noureddine Mebarki

In this paper, we present a flowchart of Gamma-Ray Burst (GRB) afterglows, aiming to create a numerical FORTRAN code. Considering several proposed models, the hydrodynamic evolution describing the external shock of the jet with the environment surrounding the GRB source or the interstellar medium is discussed. A comparison of the results and data, considering synchrotron emission as the basic mechanism for the radiation part, was also carried out.

Physical Sciences Forum doi: 10.3390/ECU2023-14040

Authors: Paola Marziani

Supermassive black holes accreting matter at very high, perhaps even super-Eddington rates appear in the sky as a special class of luminous active galactic nuclei. The Eigenvector 1/quasar main sequence parameter space allows for the definition of easy-to-implement selection criteria in the rest-frame visual and UV spectral ranges. The systematic trends of the main sequence are believed to reflect a change in accretion modes: at high accretion rates, an optically thick, geometrically thick, advection-dominated accretion disk is expected to develop. Even if the physical processes occurring in advection-dominated accretion flows are still not fully understood, a robust inference from the models&mdash;supported by a wealth of observational data&mdash;is that these extreme quasars should radiate at maximum radiative efficiency for a given black hole mass. A key empirical result is that lines emitted by ionic species of low ionization are mainly broadened because of virial motions even in such extreme radiative conditions. &ldquo;Virial luminosity&rdquo; estimates from emission line widths then become possible, in analogy to the scaling laws defined for galaxies. In this contribution, we summarize aspects related to their structure and to the complex interplay between accretion flow and line emitting region, involving dynamics of the line emitting regions, metal content, and spectral energy distribution.

Physical Sciences Forum doi: 10.3390/ECU2023-14047

Authors: Abdel Nasser Tawfik

At finite isospin chemical potential &mu;I, the tension between measured decays and partial branching ratios of neutral and charged bosons as functions of dimuon mass squared and the Standard Model (SM) isospin asymmetry can be analyzed in nonperturbative QCD-effective models, for instance, the Polyakov linear sigma model. With an almost first-principle derivation of the explicit isospin symmetry breaking, the isospin sigma field is given by &sigma;&macr;3=fK&plusmn;&minus;fK0 and the third generator of the explicit symmetry-breaking matrix H=Taha by h3=ma02(fK&plusmn;&minus;fK0), where fK&plusmn; and fK0 are the decay constants of K&plusmn; and K0, respectively, and ma0 is the mass of the a0 meson. Accordingly, the QCD phase structure can be extended to finite &mu;I. With the thermal and density dependence of ma0, fK&plusmn;, and fK0, the quantities &sigma;&macr;3 and h3 are accordingly expressed as functions of the temperature and the chemical potentials. We find that the resulting critical chiral temperature T&chi; decreases with increasing &mu;B and/or &mu;I. We conclude that the (T&chi;&minus;&mu;I) boundary has almost the same structure as that of the (T&chi;&minus;&mu;B) plane.

Physical Sciences Forum doi: 10.3390/ECU2023-14032

Authors: Matteo Califano Ivan de Martino Daniele Vernieri Salvatore Capozziello

Gravitational wave (GW) astronomy provides an independent way to estimate cosmological parameters. The detection of GWs from a coalescing binary allows a direct measurement of its luminosity distance, so these sources are referred to as &ldquo;standard sirens&rdquo; in analogy to standard candles. We investigate the capability of the Einstein Telescope, a third-generation detector that will detect tens of thousands of binary neutron stars, to constrain cosmological models. We focus on non-flat &Lambda;CDM cosmology and on some dark energy models that may resolve the so-called Hubble tension. To evaluate the accuracy with which ET will constrain cosmological parameters, we consider two types of mock datasets, depending on whether or not a short gamma-ray burst is detected and associated with the gravitational wave event using the THESEUS satellite. Depending on the mock dataset, different statistical estimators are applied: one assumes that the redshift is known, and the other marginalizes over it using a specific prior distribution.

Physical Sciences Forum doi: 10.3390/ECU2023-14039

Authors: Sergey Vernov Vsevolod Ivanov

We consider F(R) cosmological models with a scalar field. For the R&sup2; model in the spatially flat Friedmann&ndash;Lema&icirc;tre&ndash;Robertson&ndash;Walker metric, the Ricci scalar R can smoothly change its sign during the evolution if and only if the scalar field is a phantom one. In the Bianchi I metric, the Ricci scalar cannot smoothly change its sign if the corresponding solution is anisotropic at R=0. This result does not depend on the type of the scalar field. In the Bianchi I metric, the general solution of the evolution equations has been obtained.

Physical Sciences Forum doi: 10.3390/ECU2023-14038

Authors: Değer Sofuoğlu Aroonkumar Beesham

It is well known that the universe is undergoing accelerated expansion at recent times and that it underwent decelerated expansion at early times. The deceleration parameter, essentially the second derivative of the scale factor, can be used to describe these eras, being negative for acceleration and positive for deceleration. Apart from the standard &Lambda;CDM model in general relativity, there are many cosmological models in various other theories of gravity. In order to describe these models, especially their deviation from general relativity, the jerk parameter was introduced, which is basically the third derivative of the scale factor. In the &Lambda;CDM model in general relativity, the jerk parameter j is constant, with j=1. A constant jerk parameter, j=1, leads to two different scale-factor solutions, one power law and the other exponential. The power-law solution corresponds to a model in which our universe expands with deceleration, while the exponential solution corresponds to a model in which it expands with acceleration. In this study, the cosmological consequences of such a choice of the jerk parameter in the non-minimally coupled f(R,T) theory of gravity (where R is the Ricci scalar, and T is the trace of the energy&ndash;momentum tensor) and the dynamic properties of these models are investigated on a flat Friedmann&ndash;Lema&icirc;tre&ndash;Robertson&ndash;Walker background.
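
The two constant-jerk solutions quoted above can be checked directly (a standard calculation, sketched here with the dimensionless jerk parameter):

```latex
j \equiv \frac{1}{aH^{3}}\frac{d^{3}a}{dt^{3}}, \qquad H = \frac{\dot a}{a}.
\quad\text{For } a = a_{0}t^{m}:\;
j = \frac{(m-1)(m-2)}{m^{2}},\;\text{so } j = 1 \Rightarrow m = \tfrac{2}{3}
\ \text{(decelerating power law)};
\quad\text{for } a = a_{0}e^{H_{0}t}:\; j = 1\ \text{identically (accelerating)}.
```

Setting (m&minus;1)(m&minus;2)=m&sup2; cancels the quadratic term and leaves m=2/3, so the power-law branch indeed decelerates (0&lt;m&lt;1), while the de Sitter branch accelerates.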

Physical Sciences Forum doi: 10.3390/ECU2023-14037

Authors: Siwaphiwe Jokweni Vijay Singh Aroonkumar Beesham

A locally rotationally symmetric Bianchi-I model filled with strange quark matter is explored in f(R,T)=R+2&lambda;T gravity, where R is the Ricci scalar, T is the trace of the energy&ndash;momentum tensor, and &lambda; is an arbitrary constant. Exact solutions are obtained by assuming that the expansion scalar is proportional to the shear scalar. The model is found to be physically viable for &lambda;&lt;&minus;1/4. Strange quark matter at early times mimics ultra-relativistic radiation, whereas at late times it behaves as dust, quintessence, or even the cosmological constant for some specified values of &lambda;. The effective matter acts as stiff matter irrespective of the matter content and of the form of f(R,T) gravity. The model is shear-free at late times but remains anisotropic throughout the evolution.

Physical Sciences Forum doi: 10.3390/ECU2023-14044

Authors: Vsevolod Ivanov Sergei Ketov Ekaterina Pozdeeva Sergey Vernov

We propose inflationary models that are one-parameter generalizations of the Starobinsky R+R&sup2; model. Using a conformal transformation, we obtain scalar field potentials in the Einstein frame that are one-parameter generalizations of the potential of the Starobinsky inflationary model. We restrict the form of the potentials by demanding that the corresponding function F(R) is an elementary function. We obtain the inflationary parameters of the proposed models and show that their predictions agree with current observational data.
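
For reference, in the pure Starobinsky case F(R)=R+R&sup2;/(6M&sup2;) the conformal transformation to the Einstein frame yields the well-known plateau potential (the one-parameter families above generalize this form):

```latex
V(\varphi) = \frac{3}{4}\,M^{2}M_{\mathrm{Pl}}^{2}
\left(1 - e^{-\sqrt{2/3}\,\varphi/M_{\mathrm{Pl}}}\right)^{2},
```

whose exponentially flat plateau at large &#981; is what produces the small tensor-to-scalar ratio favored by the data.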

Physical Sciences Forum doi: 10.3390/ECU2023-14036

Authors: Ekaterina Pozdeeva Sergei Ketov Sergey Vernov

We study the Starobinsky&ndash;Bel&ndash;Robinson inflationary model in the slow-roll regime. In the framework of higher-curvature corrections to inflationary parameters, we estimate the maximal possible value of the dimensionless positive coupling constant &beta; coming from M-theory.

Physical Sciences Forum doi: 10.3390/ECU2023-14026

Authors: Mohammed B. Al-Fadhli

The G2 gas cloud motion data and the scarcity of observations at event-horizon-scale distances have challenged the comprehensiveness of the central supermassive black hole model. In addition, the recent Planck Legacy 2018 release has confirmed the existence of an enhanced lensing amplitude in the cosmic microwave background power spectra, which prefers a positively curved early Universe with a confidence level higher than 99%. This study investigates the impact of the background curvature and its evolution over conformal time on the formation and morphological evolution of central compact objects and the consequent effect on their host galaxies. The formation of a galaxy from the collapse of a supermassive gas cloud in the early Universe is modelled based on interaction field equations as a 4D relativistic cloud-world that flows and spins through a 4D conformal bulk of primordial positive curvature, considering the preference of the Planck release. Owing to the curved background, the derived model reveals that the galaxy and its core form in the same process, undergoing a forced vortex formation with a central event horizon leading to opposite vortices (traversable wormholes) that spatially shrink while evolving in conformal time. The model shows that the accretion flow into the supermassive compact objects only occurs at the central event horizon of the two opposite vortices, while their other ends eject relativistic jets. The simulation of the early bulk curvature evolving into the present spatial flatness demonstrates the fast orbital speed of outer stars owing to external fields exerted on galaxies. Furthermore, the gravitational potential of the early curved bulk contributes to galaxy formation, while the present spatial flatness removes this bulk contribution, which can contribute to galaxy quenching. Accordingly, the model can explain the relativistic jet generation and the G2 gas cloud motion if its orbit is around one of the vortices but at a distance from the central event horizon. Finally, the simultaneous formation of a galaxy and its core could elucidate the growth of supermassive compact galaxy cores to a mass of ~10&#8313;&nbsp;M&#8857; at &le;6% of the current age of the Universe.

Physical Sciences Forum doi: 10.3390/ECU2023-14031

Authors: Igor Bulyzhenkov

The metric self-organization of matterspace&ndash;time implies a nonlocal correlation of its affine connections and the fulfillment of the volumetric conservation of energy&ndash;momentum under shifts in coordinate time. Geodesic forces or accelerations in metric fields of general relativity correspond to local pushes by the Lomonosov gravitational liquid but not to the retarded interactions between distant bodies. The mathematics of Russian Cosmism for the monistic all-unity of ethereal matter&ndash;space with the continuous distribution of mass&ndash;energy replaces Newtonian gravity &lsquo;from there to here&rsquo; with the local kinetic stresses &lsquo;from here to there&rsquo; due to the spatial asymmetry of inertial densities within a nonlocal whole. The inverse square law for ethereal pushes of concentrated (visible) masses can be controlled locally by a subtle resonant intervention into their polarized densities.

Physical Sciences Forum doi: 10.3390/ECU2023-14019

Authors: Siva Mythili Gonuguntla

Pauli established the standard view that the spin of the electron is a completely abstract, non-classical angular momentum that cannot be thought of as the rotation of anything. Here, we give a pedagogical presentation of old work by Belinfante (1939), recently updated by Ohanian (1986), which shows that, contrary to Pauli&rsquo;s edict, the spin of the electron can be viewed as the rotational angular momentum in the wave field of the electron.

Physical Sciences Forum doi: 10.3390/ECU2023-14033

Authors: Rémy Koskas Jean-Michel Alimi

Halo Dark Matter (DM) formation is a complex process, intertwining gravitational and cosmological nonlinear phenomena. One of the manifestations of this complexity is the shape of the resulting present-day DM halos: simulations and observations show that they are triaxial objects. Interestingly, those shapes carry cosmological information. We prove that cosmology, and particularly the dark energy model, leaves a lasting trace on present-day halos and their properties: the overall shape of a DM halo behaves differently when the DE model is varied. We explain how this can be used to literally &ldquo;read&rdquo; the fully nonlinear power spectrum in the halos&rsquo; shapes at z=0. To that end, we worked with &ldquo;Dark Energy Universe Simulations&rdquo; DM halos: halos grown in three different dark energy models, whose parameters were chosen in agreement with both CMB and SN Ia data.

Physical Sciences Forum doi: 10.3390/ECU2023-14030

Authors: Balázs Bradák Roland Novák Christopher Gomez

Wispy Terrain, with its chasmata, is one of the enigmatic regions of Dione. It consists of quasi-parallel graben and troughs, in parts with horsts, indicating extensional and shear stresses. This study introduces some observations of compression-related features and proposes a new regional formation model. The study of the relationship between impact craters and tectonic features revealed certain &ldquo;lost&rdquo; parts of some crosscut craters, indicating additional cryotectonic features, the appearance of accretionary prism-like phenomena, and, theoretically, subsumption-like processes. This study provides new information about the surface-renewal processes in one of the youngest and probably still active regions of Dione.

Physical Sciences Forum doi: 10.3390/ECU2023-14020

Authors: Roland Novak Balazs Bradak Jozsef Kovacs Christopher Gomez

In this work, we examined characteristics of the currently confirmed exoplanet population in order to characterize some of the crucial parameters for ocean formation. Two correlation heatmaps were created: one for the exoplanets in general, and one for exoplanets that can be found in the habitable zone according to calculations. Based on these, we found possible associations between planetary radius/mass, stellar metallicity, and multiple characteristics. We propose plans for further studies of possible proxies for exoplanetary ocean exploration.

Physical Sciences Forum doi: 10.3390/ECU2023-14024

Authors: Hayet Sahi Amal Ait El Djoudi

This work deals with the deconfinement phase transition from a hadronic gas (HG) phase consisting of massive pions to a quark&ndash;gluon plasma (QGP) phase consisting of gluons, massless up and down quarks, and massive strange quarks, in addition to their antiquarks. Based on the Bag and coexistence models, we study the variation of the pressure characterizing both the HG and QGP phases. For the latter, we calculate the partition function of the color-singlet QGP within the projection method, using a density of states containing the volume term only. We investigate the phase diagram of strongly interacting matter in the &mu;&ndash;T plane in several cases: in the HG phase, we first consider massless pions and then account for their masses; in the QGP phase, we first take it to consist of two massless u and d quarks and then include massive strange quarks.

Physical Sciences Forum doi: 10.3390/ECU2023-14028

Authors: OV Kiren Kenath Arun Chandra Sivaram KT Paul

Here, we discuss the possibility of the admixture of baryons to the DM primordial planets, with the DM particles varying in mass from 20 GeV to 100 GeV. We have considered different fractions of admixed particles to form the planet. The mass of a primordial planet made completely of DM ranges from asteroid mass to Neptune mass. However, the mass of primordial planets admixed with DM and baryonic matter is found to increase with the fraction of baryonic matter in the planets, and the mass of these objects can go well beyond the mass of Jupiter (around 40 times Jupiter&rsquo;s mass) and can also approach sub-stellar (brown dwarf) mass. So far, thousands of exoplanets have been discovered by the Kepler mission, and more will be found by NASA&rsquo;s Transiting Exoplanet Survey Satellite (TESS) mission, which is observing the entire sky to locate planets orbiting the nearest and brightest stars. Many exoplanets discovered so far, such as exo-Jupiters, fall in this mass range, and it is not certain whether these exoplanets are entirely made of baryons. Some of the exoplanets with a mass several times that of Jupiter could be possible signatures of the presence of primordial planets with an admixture of baryonic and DM particles. It is also found that some of these planets could even reach sub-stellar mass (10&#179;&#178; g), such as that of a brown dwarf. Additionally, even if a small fraction of DM particles is trapped in these objects, the flux of ambient DM particles would be reduced significantly. This could be one of the many reasons for not detecting DM particles in various experiments, such as XENON1T, as suggested earlier. If two such primordial planets (in a binary system) merge, they will release a large amount of energy. The energy released in gravitational waves, as well as the time scale of the merger of these objects, is found to increase with the mass of the primordial objects. The frequency of the gravitational waves emitted by these systems falls within the range of LIGO. The objects near the galactic center could consist of such primordial objects, planets, comets, etc. We also discuss the possibility of the tidal break-up of these primordial objects in the presence of a black hole (BH). The mass of the BH required for tidal break-up is calculated, and it is found to increase with the DM particle mass and with the fraction of baryons in these objects. The energy released during tidal break-up will be emitted as gravitational waves. The energy released, as well as the frequency of the waves, is tabulated, and the frequency is in the sensitivity range of LIGO.
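
The tidal break-up condition discussed above can be sketched at the order-of-magnitude level (an illustration with hypothetical masses and radii, not the values used in the paper): a self-gravitating body of mass m and radius r is disrupted by a black hole of mass M_BH inside the tidal radius R_t &asymp; r(2M_BH/m)^(1/3).

```python
# Order-of-magnitude sketch of tidal break-up near a black hole
# (illustrative values only; not the masses/radii used in the paper).
# A self-gravitating body of mass m_body and radius r_body is disrupted
# inside the tidal radius  R_t ~ r_body * (2 * m_bh / m_body)**(1/3).

M_SUN = 1.989e33  # solar mass, g

def tidal_radius(m_bh, m_body, r_body):
    """Distance below which the BH's tidal force exceeds the body's self-gravity."""
    return r_body * (2.0 * m_bh / m_body) ** (1.0 / 3.0)

# hypothetical primordial planet: roughly a Jupiter mass with an Earth-like radius
m_planet = 1.9e30   # g
r_planet = 6.4e8    # cm

r_t_small = tidal_radius(4.0e6 * M_SUN, m_planet, r_planet)  # Sgr A*-scale BH
r_t_large = tidal_radius(1.0e8 * M_SUN, m_planet, r_planet)  # heavier BH
# the tidal radius grows with BH mass, so heavier BHs disrupt from farther out
```

The cube-root scaling also shows the trend stated in the abstract: a denser body (higher m_body at fixed r_body, e.g., a more DM-dominated planet) requires a closer approach, i.e., a larger BH mass, for break-up at a given distance.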

Physical Sciences Forum doi: 10.3390/ECU2023-14010

Authors: Balázs Bradák Mayuko Nishikawa Christopher Gomez

The study introduces a theory about a giant impact on the surface of Dione. Our study suspects a relatively low-velocity (&le;5 km/s) collision between a c.a. 50&ndash;80 km diameter object and Dione, which might have resulted in the resurfacing of its intermediate cratered terrain. The source of the impactor might have been satellite-centric debris, a unique impactor population suspected to exist in the Saturnian system. Other possible candidates are asteroids appearing during the outer Solar System heavy bombardment period, or a collision that might have happened during the &ldquo;giant impact phase&rdquo; of the early Saturnian system (coinciding with the Late Heavy Bombardment, or not).

Physical Sciences Forum doi: 10.3390/ECU2023-14021

Authors: Louise Rebecca Arun Kenath Chandra Sivaram

It is well established from various pieces of observational evidence that the relative abundance of baryonic matter in the Universe is less than 5%. The remaining 95% is made up of dark matter (DM) and dark energy. In view of the negative results from dark matter detection experiments running for several years, we had earlier proposed alternate models (which do not require DM) by postulating a minimal field strength (analogous to minimal curvature) and a minimal acceleration. These postulates led to the Modification of Newtonian Dynamics (MOND) and Modification of Newtonian Gravity (MONG), respectively. Some of the independent results that support the existence of non-baryonic matter are the mass&ndash;radius relation (that holds true for any gravitationally bound large-scale structure), Eddington luminosity, etc. Here, we discuss how these physical implications can be accounted for from the results of MONG without invoking DM.
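
For context, the minimal-acceleration postulate reproduces the standard deep-MOND scaling (a textbook result, quoted here for illustration; it is the baseline the authors' MONG modifies): below a0, the effective acceleration is the geometric mean of a0 and the Newtonian value, which flattens rotation curves without DM:

```latex
a = \sqrt{g_{N}a_{0}} \quad (g_{N} \ll a_{0}), \qquad
\frac{v^{2}}{r} = \sqrt{\frac{GM}{r^{2}}\,a_{0}}
\;\Longrightarrow\; v^{4} = GMa_{0},
```

i.e., an asymptotically flat rotation curve and the baryonic Tully&ndash;Fisher scaling v&#8308; &#8733; M.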

Physical Sciences Forum doi: 10.3390/ECU2023-14022

Authors: Valentin Allard Nicolas Chamel

The presence of currents in the interior of cold neutron stars can lead to a state in which nucleons remain superfluid while the quasiparticle energy spectrum has no gap. We show within the self-consistent time-dependent nuclear energy density functional theory that the nucleon specific heat is then comparable to that in the normal phase, contrasting with the classical BCS result in the absence of superflows. This dynamical, gapless superfluid state has important implications for the cooling of neutron stars.

Physical Sciences Forum doi: 10.3390/ECU2023-14015

Authors: Stavros Nonis Antonios Leisos Apostolos Tsirigotis

Astroneu is an array of autonomous extensive air shower (EAS) detection stations deployed at the Hellenic Open University (HOU) campus on the outskirts of Patras in Western Greece. In the first phase of operation, nine scintillator detectors and three radio frequency (RF) antennas were installed and operated at the site. The detector units were arranged in three autonomous stations, each consisting of three scintillator detectors (SDM) and one RF antenna. In the second phase of operation, three more antennas were deployed at one station in order to study the correlation of the RF signals from four antennas subject to the same shower event. In this report, we present the standard offline SDM-RF data and simulation analysis, the main research results concerning the reconstruction of the EAS parameters, as well as the prospects of a new compact array that will be deployed by 2023.

Physical Sciences Forum doi: 10.3390/ECU2023-14025

Authors: Sergey L. Cherkas Vladimir L. Kalashnikov

Dark matter in the Milky Way is explained by the F-type of vacuum polarization, which could represent dark radiation. A nonsingular solution for dark radiation exists in the presence of eicheon (i.e., black hole in old terminology) in the galaxy&rsquo;s center. The model is spherically symmetric, but an approximate surface density of a baryonic galaxy disk is taken into account by smearing the disk over a sphere.

Physical Sciences Forum doi: 10.3390/ECU2023-14018

Authors: Kenath Arun Chandra Sivaram Avijeet Prasad

One of the biggest challenges in modern physics is how to unify gravity with quantum theory. There is an absence of a complete quantum theory of gravity, and conventionally it is thought that the effects of quantum gravity occur only at high energies (Planck scale). Here, we suggest that certain novel quantum effects of gravity can become significant even at lower energies and could be tested at laboratory scales. We also suggest a few indirect effects of dark energy that can show up at laboratory scales. Using these ideas, we set observational constraints on radio recombination lines of the Rydberg atoms. We further suggest that high-precision measurements of Casimir effects for smaller plate separation could also show some manifestations of the presence of dark energy.

Physical Sciences Forum doi: 10.3390/ECU2023-14011

Authors: Paul O’Hara

Can we accurately model the spin state of a quantum particle? If so, we should be able to make identical copies of such a state and also obtain its mirror image. In quantum mechanics, many subatomic particles can form entangled pairs that are mirror images of each other, although the state of an individual particle cannot be duplicated or cloned, as experimentally demonstrated by Aspect, Clauser and Zeilinger, the winners of the Nobel Prize in Physics 2022. We show that there is a higher-order symmetry associated with the SL(2,C) group that underlies the singlet state, which means that the singlet pairing is preserved under Lorentz transformations independently of the metric used. The Pauli exclusion principle can be derived from this symmetry.

Physical Sciences Forum doi: 10.3390/ECU2023-14017

Authors: Nikolai N. Shchechilin John M. Pearson Nicolas Chamel

The densest part of neutron star crusts may contain very exotic nuclear configurations, so-called nuclear pasta. We investigate the effect of nuclear symmetry energy on the existence of such phases in cold non-accreting neutron stars. For this purpose, we apply three Brussels&ndash;Montreal functionals based on generalized Skyrme effective interactions, whose parameters were accurately calibrated to reproduce both experimental data on nuclei and realistic neutron-matter equations of state. These functionals differ in their predictions for the density dependence of the symmetry energy. Within the fourth-order extended Thomas&ndash;Fermi method, we find that pasta occupies a wider region of the crust for models with a lower slope of the symmetry energy (and higher symmetry energy at relevant densities) in agreement with previous studies based on pure Thomas&ndash;Fermi approximation and compressible liquid-drop models. However, the incorporation of microscopic corrections consistently calculated with the Strutinsky integral method leads to a significant shift of the onset of the pasta phases to higher densities due to the enhanced stability of spherical clusters. As a result, the pasta region shrinks substantially and the role of symmetry energy weakens. This study sheds light on the importance of quantum effects for reliably describing pasta phases in neutron stars.

Physical Sciences Forum doi: 10.3390/ECU2023-14016

Authors: Vadim Monakhov

We have proven that, under the standard charge conjugation approach, the Majorana mass term in QFT must vanish. We have derived formulas for the Majorana spinor field operator without any assumptions about the second quantization procedure. The fact that the Majorana mass term vanishes not only in the c-theory, which was known, but also in the q-theory (the theory of second quantization), requires a revision of ideas about the generation of neutrino mass using the seesaw mechanism.

Physical Sciences Forum doi: 10.3390/ECU2023-14023

Authors: Loïc Perot Nicolas Chamel

Space-based gravitational-wave detectors, such as the Laser Interferometer Space Antenna, allow for the probing of the interior of white dwarfs in binaries through the imprints of tidal effects on the gravitational wave signal. In this study, we have computed the tidal deformability of white dwarfs in full general relativity, taking into account the crystallization of their core. The elasticity of the core is found to systematically reduce the tidal deformability, especially for low-mass stars. Moreover, it is shown that errors on the tidal deformability due to the use of the Newtonian theory can become important for massive white dwarfs. Finally, the orbital evolution of eccentric binaries is investigated. Measuring the precession rate of these systems could provide estimations of the individual masses. However, it is found that the neglect of crystallization could lead to very large errors.

Physical Sciences Forum doi: 10.3390/psf2022005050

Authors: Fabrizia Guglielmetti Philipp Arras Michele Delli Veneri Torsten Enßlin Giuseppe Longo Lukasz Tychoniec Eric Villard

The Atacama Large Millimeter/submillimeter Array (ALMA), with the planned electronic upgrades, will deliver an unprecedented number of deep and high-resolution observations. Wider fields of view will be possible, at the consequent cost of more demanding image reconstruction. Alternatives to the commonly used applications in image processing have to be sought and tested. Advanced image reconstruction methods are critical to meet the data requirements needed for operational purposes. Astrostatistics and astroinformatics techniques are employed. Evidence is given that these interdisciplinary fields of study, applied to synthesis imaging, meet the Big Data challenges and have the potential to enable new scientific discoveries in radio astronomy and astrophysics.

]]>Physical Sciences Forum doi: 10.3390/psf2022005049

Authors: Shlomo Dubnov Vignesh Gokul Gerard Assayag

Machine improvisation is the ability of musical generative systems to interact with either another musical agent or a human improviser. This is a challenging task, as it is not trivial to define a quantitative measure that evaluates the creativity of the musical agent. It is also not feasible to create huge paired corpora of agents interacting with each other to train a critic system. In this paper, we consider the problem of controlling machine improvisation by switching between several pre-trained models, selecting the best match to an external control signal. We introduce a measure, SymTE, that searches for the best transfer entropy between representations of the generated and control signals over multiple generative models.

]]>Physical Sciences Forum doi: 10.3390/psf2022005048

Authors: Seyedeh Azadeh Fallah Mortezanejad Ali Mohammad-Djafari

In many Bayesian computations, we first obtain the expression of the joint distribution of all the unknown variables given the observed data. In general, this expression is not separable in those variables. Thus, obtaining the marginals for each variable and computing the expectations is difficult and costly. This problem becomes even more difficult in high-dimensional settings, which is an important issue in inverse problems. We may then try to propose a surrogate expression with which we can carry out approximate computations. Often, a separable approximating expression can be useful enough. The variational Bayesian approximation (VBA) is a technique that approximates the joint distribution p with an easier, for example separable, distribution q by minimizing the Kullback&ndash;Leibler divergence KL(q|p). When q is separable in all the variables, the approximation is also called the mean field approximation (MFA), and q is then the product of the approximated marginals. A first standard and general algorithm is the alternate optimization of KL(q|p) with respect to each factor qi. A second general approach is its optimization in a Riemannian manifold. However, in this paper, for practical reasons, we consider the case where p is in the exponential family, and so is q. In this case, KL(q|p) becomes a function of the parameters &theta; of the exponential family. We can then use any optimization algorithm to obtain those parameters. In this paper, we compare three optimization algorithms, namely a standard alternate optimization, a gradient-based algorithm and a natural gradient algorithm, and study their relative performances in three examples.
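
A minimal sketch of the mean field idea in this abstract (our own illustration, not the paper's code): minimize KL(q|p) between a separable Gaussian q and a correlated Gaussian p by gradient descent, and compare with the known closed-form mean-field optimum.

```python
import numpy as np

# Mean-field (separable) Gaussian approximation of a correlated 2D Gaussian p
# by minimizing KL(q|p), in the spirit of the VBA setup described above.

Sigma = np.array([[1.0, 0.8], [0.8, 2.0]])   # covariance of p (zero mean)
P = np.linalg.inv(Sigma)                      # precision matrix of p

def kl_q_p(s):
    # KL(q|p) for q = N(0, diag(s)) and p = N(0, Sigma)
    S = np.diag(s)
    return 0.5 * (np.trace(P @ S) + np.linalg.slogdet(Sigma)[1]
                  - np.sum(np.log(s)) - len(s))

# gradient descent on the log-variances (keeps the variances positive)
log_s = np.zeros(2)
for _ in range(2000):
    s = np.exp(log_s)
    grad = 0.5 * (np.diag(P) - 1.0 / s) * s   # chain rule through exp
    log_s -= 0.1 * grad

s_opt = np.exp(log_s)
# known mean-field optimum: s_i = 1 / (Sigma^{-1})_{ii}
print(s_opt, 1.0 / np.diag(P))
```

The recovered variances match the classical result that the mean-field Gaussian uses the inverse of the diagonal of the precision matrix.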

]]>Physical Sciences Forum doi: 10.3390/psf2022005047

Authors: Philippe Jacquet

One of the key issues in machine learning is the characterization of the learnability of a problem. Regret is a way to quantify learnability. Quantum tomography is a special case of machine learning where the training set is a set of quantum measurements and the ground truth is the result of these measurements, but nothing is known about the hidden quantum system. We show that in some cases quantum tomography is a hard problem to learn. We consider a problem related to optical fiber communication where information is encoded in photon polarizations. We show that the learning regret cannot decay faster than 1/T, where T is the size of the training dataset, and that incremental gradient descent may converge even more slowly.

]]>Physical Sciences Forum doi: 10.3390/psf2022005046

Authors: Johannes Buchner

Bayesian inference with nested sampling requires a likelihood-restricted prior sampling method, which draws samples from the prior distribution that exceed a likelihood threshold. For high-dimensional problems, Markov Chain Monte Carlo derivatives have been proposed. We numerically study ten algorithms based on slice sampling, hit-and-run and differential evolution algorithms in ellipsoidal, non-ellipsoidal and non-convex problems from 2 to 100 dimensions. Mixing capabilities are evaluated with the nested sampling shrinkage test. This makes our results valid independent of how heavy-tailed the posteriors are. Given the same number of steps, slice sampling is outperformed by hit-and-run and whitened slice sampling, while whitened hit-and-run does not provide results that are as good. Proposing along differential vectors of live point pairs also leads to the highest efficiencies and appears promising for multi-modal problems. The tested proposals are implemented in the UltraNest nested sampling package, enabling efficient low- and high-dimensional inference for a large class of practical problems relevant to astronomy, cosmology and particle physics.
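
The shrinkage test mentioned above can be illustrated with a toy simulation (our sketch, not the UltraNest implementation): with N live points and unbiased likelihood-restricted prior sampling, each iteration shrinks the enclosed prior volume by a factor t ~ Beta(N, 1), so the mean of ln t should be -1/N.

```python
import numpy as np

# Minimal nested sampling shrinkage test on a 1D problem where the
# enclosed prior volume is known exactly (likelihood decreasing in x,
# uniform prior on [0, 1], so the contour volume is just the worst x).

rng = np.random.default_rng(0)
N = 25                               # number of live points
iters = 5000
log_t = []

volume = 1.0
live = rng.uniform(0.0, volume, N)   # smaller x = higher likelihood
for _ in range(iters):
    worst = live.max()               # volume enclosed by the new contour
    log_t.append(np.log(worst / volume))
    volume = worst
    # perfect likelihood-restricted prior sampling: uniform inside contour
    live[live.argmax()] = rng.uniform(0.0, volume)

print(np.mean(log_t), -1.0 / N)      # should agree if sampling is unbiased
```

A biased sampler (one that fails to reach the whole restricted prior) would show a systematic deviation of mean ln t from -1/N, which is what the test detects.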

]]>Physical Sciences Forum doi: 10.3390/psf2022005045

Authors: Tahereh Najafi Rosmina Jaafar Rabani Remli Wan Asyraf Wan Zaidi Kalaivani Chellappan

Epilepsy is a multiscale disease in which small alterations at the cellular scale affect the electroencephalogram (EEG). We use a computational model to bridge the cellular scale to the EEG by evaluating the ionic conductance of the Hodgkin&ndash;Huxley (HH) membrane model and comparing the EEG responses to intermittent photic stimulation (IPS) of epilepsy patients and normal subjects. Modeling is sectioned into IPS encoding, determination of an LTI system, and modification of the ionic conductance to generate epilepsy signals. Machine learning is employed, yielding an ionic conductance of 0.6 mS/cm2 in epilepsy. This ionic conductance is lower than the unitary conductance for normal subjects.

]]>Physical Sciences Forum doi: 10.3390/psf2022005044

Authors: Mariela Portesi Juan Manuel Pujol Federico Holik

We study reciprocity relations between fluctuations of the probability distributions corresponding to position and momentum, and other observables, in quantum theory. These kinds of relations have been previously studied in terms of quantifiers based on the Lipschitz constants of the concomitant distributions. However, it turned out that they were not valid for all states. Here, we ask the following question: can those relations be described using other quantifiers? By appealing to the Fisher information, we study reciprocity relations for different families of states. In particular, we look for a connection of this problem with previous works.

]]>Physical Sciences Forum doi: 10.3390/psf2022005042

Authors: Antoine Bourget

Quivers are oriented graphs that have profound connections to various areas of mathematics, including representation theory and geometry. Quiver representations correspond to a vast generalization of classical linear algebra problems. The geometry of these representations can be described in the framework of Hamiltonian reduction and geometric invariant theory, giving rise to the concept of quiver variety. In parallel to these developments, quivers have appeared to naturally encode certain supersymmetric quantum field theories. The associated quiver variety then corresponds to a part of the moduli space of vacua of the theory. However, physics tells us that another natural geometric object associated with quivers exists, which can be seen as a magnetic analog of the (electric) quiver variety. When viewed from that angle, magnetic quivers are a new tool, developed in the past decade, that help mathematicians and physicists alike to understand geometric spaces. This note is the writeup of a talk in which I review these developments from both the mathematical and physical perspective, emphasizing the dialogue between the two communities.

]]>Physical Sciences Forum doi: 10.3390/psf2022005041

Authors: Andrée De Backer Abdelkader Souidi Etienne A. Hodille Emmanuel Autissier Cécile Genevois Farah Haddad Antonin Della Noce Christophe Domain Charlotte S. Becquart Marie France Barthe

Materials in fission reactors or fusion tokamaks are exposed to neutron irradiation, which creates defects in the microstructure. With time, depending on the temperature, defects diffuse and form, among others, nanocavities, altering the material performance. The goal of this work is to determine the diffusion properties of the nanocavities in tungsten. We combine (i) a systematic experimental study in irradiated samples annealed at different temperatures up to 1800 K (the created nanocavities diffuse, and their coalescence is studied by transmission electron microscopy); (ii) our object kinetic Monte Carlo model of the microstructure evolution fed by a large collection of atomistic data; and (iii) a multi-objective optimization method (using model inversion) to obtain the diffusion of nanocavities, input parameters of our model, from the comparison with the experimental observations. We simplify the multi-objective function, proposing a projection into the parameter space. Non-dominated solutions are revealed: two &ldquo;valleys&rdquo; of minima corresponding to the nanocavity density and size objectives, respectively, which delimit the Pareto optimal solution. These &ldquo;valleys&rdquo; provide the upper and lower uncertainty bounds on the diffusion, beyond the uncertainties on the experimental and simulated results. The nanocavity diffusion can be split into three domains: monovacancies and small vacancy clusters, for which atomistic models are affordable; small nanocavities, for which our approach is decisive; and nanocavities larger than 1.5 nm, for which the classical surface diffusion theory is valid.

]]>Physical Sciences Forum doi: 10.3390/psf2022005040

Authors: John Skilling Kevin H. Knuth

As physicists, we wish to make mental models of the world around us. For this to be useful, we need to be able to classify features of the world into symbols and develop a rational calculus for their manipulation. In seeking maximal generality, we aim for minimal restrictive assumptions. That inquiry starts by developing basic arithmetic and proceeds to develop the formalism of quantum theory and relativity.

]]>Physical Sciences Forum doi: 10.3390/psf2022005039

Authors: Viktoria Kainz Céline Bœhm Sonja Utz Torsten Enßlin

Social communication is omnipresent and a fundamental basis of our daily lives. Especially due to the increasing popularity of social media, communication flows are becoming more complex, faster and more influential. It is therefore not surprising that in these highly dynamic communication structures, strategies are also developed to spread certain opinions, to deliberately steer discussions or to inject misinformation. The reputation game is an agent-based simulation that uses information theoretical principles to model the effect of such malicious behavior taking reputation dynamics as an example. So far, only small groups of 3 to 5 agents have been studied, whereas now, we extend the reputation game to larger groups of up to 50 agents, also including one-to-many conversations. In this setup, the resulting group dynamics are examined, with particular emphasis on the emerging network topology and the influence of agents&rsquo; personal characteristics thereon. In the long term, the reputation game should thus help to determine relations between the arising communication network structure, the used communication strategies and the recipients&rsquo; behavior, allowing us to identify potentially harmful communication patterns, e.g., in social media.

]]>Physical Sciences Forum doi: 10.3390/psf2022005038

Authors: Matthias Cléry Laurent Mazliak

In 1928, the Henri Poincar&eacute; Institute opened in Paris thanks to the efforts of the mathematician Emile Borel and the support of the Rockefeller Foundation. Teaching and research on the mathematics of chance were placed by Borel at the center of the institute&rsquo;s activity, an outcome achieved by the French mathematicians in the face of indifference, and even hostility, towards a discipline accused of a lack of seriousness. This historical account, based in large part on the results of Matthias Cl&eacute;ry&rsquo;s thesis, presents the way in which Borel became convinced of the importance of making up for the gap between France and other countries as regards the place of probability and statistics in the educational system. It elaborates the strategy that led to the creation of the IHP and shows how its voluntarist functioning enabled it to become, within ten years, one of the main world centers of reflection on this subject.

]]>Physical Sciences Forum doi: 10.3390/psf2022005037

Authors: Qiao Huang Jean-Claude Zambrini

This paper summarises a new framework of Stochastic Geometric Mechanics that attributes a fundamental role to Hamilton&ndash;Jacobi&ndash;Bellman (HJB) equations. These are associated with geometric versions of probabilistic Lagrangian and Hamiltonian mechanics. Our method uses tools of the &ldquo;second-order differential geometry&rdquo;, due to L. Schwartz and P.-A. Meyer, which may be interpreted as a probabilistic counterpart of the canonical quantization procedure for geometric structures of classical mechanics. The inspiration for our results comes from what is called &ldquo;Schr&ouml;dinger&rsquo;s problem&rdquo; in Stochastic Optimal Transport theory, as well as from the hydrodynamical interpretation of quantum mechanics. Our general framework, however, should also be relevant in Machine Learning and other fields where HJB equations play a key role.

]]>Physical Sciences Forum doi: 10.3390/psf2022005035

Authors: Roland Preuss Udo von Toussaint

Data for complex plasma&ndash;wall interactions require long-running and expensive computer simulations of codes like EIRENE or SOLPS. Furthermore, the number of input parameters is large, which results in a low coverage of the (physical) parameter space. Unpredictable occasions of outliers create a need to conduct the exploration of this multi-dimensional space using robust analysis tools. We restate the Gaussian-process (GP) method as a Bayesian adaptive exploration method for establishing surrogate surfaces in the variables of interest. On this basis, we complete the analysis by the Student-t process (TP) method in order to improve the robustness of the result with respect to outliers. The most obvious difference between both methods shows up in the marginal likelihood for the hyperparameters of the covariance function where the TP method features a broader marginal probability distribution in the presence of outliers.

]]>Physical Sciences Forum doi: 10.3390/psf2022005036

Authors: Ariel Caticha

The entropic dynamics (ED) approach to quantum mechanics is ideally suited to address the problem of measurement because it is based on entropic and Bayesian methods of inference that have been designed to process information and data. The approach succeeds because ED achieves a clear-cut separation between ontic and epistemic elements: positions are ontic, while probabilities and wave functions are epistemic. Thus, ED is a viable realist &psi;-epistemic model. Such models are widely assumed to be ruled out by various no-go theorems. We show that ED evades those theorems by adopting purely epistemic dynamics and denying the existence of an ontic dynamics at the subquantum level.

]]>Physical Sciences Forum doi: 10.3390/psf2022005034

Authors: Fabio Di Nocera

We discuss the geometric aspects of a recently described unfolding procedure and show the form of objects relevant in the field of quantum information geometry in the unfolding space. In particular, we show the form of the quantum monotone metric tensors characterized by Petz and retrace in this unfolded perspective a recently introduced procedure of extracting a covariant tensor from a relative g-entropy.

]]>Physical Sciences Forum doi: 10.3390/psf2022005028

Authors: Orestis Loukas Ho-Ryun Chung

Science aims at identifying suitable models that best describe a population based on a set of features. Lacking information about the relationships among features, there is no justification to a priori fix a certain model. Ideally, we want to incorporate only those relationships into the model which are supported by the observed data. To achieve this goal, the model that best balances goodness of fit with simplicity should be selected. However, parametric approaches to model selection encounter difficulties pertaining to the precise definition of the invariant content that enters the selection procedure and its interpretation. A naturally invariant formulation of any statistical model consists of the joint distribution of features, which provides all the information that is required to answer questions in classification tasks or the identification of feature relationships. The principle of Maximum Entropy (maxent) offers a framework to directly estimate a model for this joint distribution based on phenomenological constraints. Reformulating the inverse problem of obtaining a model distribution as an under-constrained linear system of equations, where the remaining degrees of freedom are fixed by entropy maximization, tremendously simplifies large-N expansions around the optimal distribution of Maximum Entropy. We have exploited this conceptual advancement to clarify the nature of prominent model-selection schemes, providing an approach to systematically select significant constraints evidenced by the data. To facilitate the treatment of higher-dimensional problems, we propose hypermaxent&mdash;a clustering method to efficiently tackle the maxent selection procedure. We demonstrate the utility of our approach by applying the advocated methodology to analyze long-range interactions in spin glasses and uncover three-point effects in COVID-19 data.
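
The basic maxent step can be made concrete in a toy setting (our sketch, unrelated to the hypermaxent implementation): with a single feature constrained to a given mean, the maximum entropy distribution has the exponential-family form p_i proportional to exp(lambda * x_i), and the Lagrange multiplier lambda can be found by bisection.

```python
import numpy as np

# Toy maxent example: find the maximum entropy distribution on the states
# x = 0..5 subject to a single phenomenological constraint E[x] = m_target.

x = np.arange(6)          # states of a single feature
m_target = 3.2            # observed mean to be reproduced

def mean_of(lam):
    # mean of the exponential-family distribution p_i ~ exp(lam * x_i)
    w = np.exp(lam * x)
    return (w * x).sum() / w.sum()

# bisection on the monotone map lambda -> mean
lo, hi = -10.0, 10.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if mean_of(mid) < m_target:
        lo = mid
    else:
        hi = mid

lam = 0.5 * (lo + hi)
p = np.exp(lam * x)
p /= p.sum()
print(p, (p * x).sum())   # maxent distribution reproducing the mean
```

Adding further constraints (e.g., pairwise moments) turns this into the multi-dimensional dual optimization that schemes like the one above must solve efficiently.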

]]>Physical Sciences Forum doi: 10.3390/psf2022005032

Authors: George Jeffreys Siu-Cheong Lau

We find an application in quantum finite automata for the ideas and results of [JL21] and [JL22]. We reformulate quantum finite automata with multiple-time measurements using the algebraic notion of a near-ring. This gives a unified understanding towards quantum computing and deep learning. When the near-ring comes from a quiver, we have a nice moduli space of computing machines with a metric that can be optimized by gradient descent.

]]>Physical Sciences Forum doi: 10.3390/psf2022005033

Authors: Vincent Eberle Philipp Frank Julia Stadler Silvan Streit Torsten Enßlin

Bayesian imaging algorithms are becoming increasingly important in, e.g., astronomy, medicine and biology. Given that many of these algorithms compute iterative solutions to high-dimensional inverse problems, the efficiency and accuracy of the instrument response representation are of high importance for the imaging process. For this reason, point spread functions, which make up a large fraction of the response functions of telescopes and microscopes, are usually assumed to be spatially invariant in a given field of view and can thus be represented by a convolution. For many instruments, this assumption does not hold and degrades the accuracy of the instrument representation. Here, we discuss the application of butterfly transforms, which are linear neural network structures whose sizes scale subquadratically with the number of data points. Butterfly transforms are efficient by design, since they are inspired by the structure of the Cooley&ndash;Tukey Fast Fourier Transform. In this work, we combine them in several ways into butterfly networks, compare the different architectures with respect to their performance and identify a representation that is suitable for the efficient representation of a synthetic spatially variant point spread function up to a 1% error.
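
The Cooley&ndash;Tukey butterfly structure that inspires these networks can be sketched in a few lines (illustrative only; the paper's butterfly networks learn such sparse factors rather than fixing them to the FFT values):

```python
import numpy as np

# Radix-2 Cooley-Tukey FFT built from explicit 2x2 "butterfly" combinations:
# O(n log n) operations instead of a dense O(n^2) matrix-vector product.

def fft_butterfly(x):
    x = np.asarray(x, dtype=complex)
    n = len(x)
    if n == 1:
        return x
    even = fft_butterfly(x[0::2])
    odd = fft_butterfly(x[1::2])
    tw = np.exp(-2j * np.pi * np.arange(n // 2) / n) * odd  # twiddle factors
    # each output pair (k, k + n/2) is one butterfly combination
    return np.concatenate([even + tw, even - tw])

x = np.random.default_rng(1).normal(size=16)
print(np.max(np.abs(fft_butterfly(x) - np.fft.fft(x))))  # agrees with numpy
```

Replacing the fixed twiddle factors with trainable sparse matrices of the same connectivity gives a transform that keeps the subquadratic scaling while adapting to a spatially variant response.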

]]>Physical Sciences Forum doi: 10.3390/psf2022005031

Authors: Geoffroy Delamare Ulisse Ferrari

The inverse Ising model is used in computational neuroscience to infer probability distributions of the synchronous activity of large neuronal populations. This method allows for finding the Boltzmann distribution with single-neuron biases and pairwise interactions that maximizes the entropy and reproduces the empirical statistics of the recorded neuronal activity. Here, we apply this strategy to large populations of retinal output neurons (ganglion cells) of different types, stimulated by multiple visual stimuli with their own statistics. The activity of retinal output neurons is driven by both the inputs from upstream neurons, which encode the visual information and reflect stimulus statistics, and the recurrent connections, which induce network effects. We first apply the standard inverse Ising model approach and show that it accounts well for the system&rsquo;s collective behavior when the input visual stimulus has short-ranged spatial correlations but fails for long-ranged ones. This happens because stimuli with long-ranged spatial correlations synchronize the activity of neurons over long distances. This effect cannot be accounted for by pairwise interactions, and hence not by the pairwise Ising model. To solve this issue, we apply a previously proposed framework that includes a temporal dependence in the single-neuron biases to model how neurons are driven in time by the stimulus. Thanks to this addition, the stimulus effects are taken into account by the biases, and the pairwise interactions allow for the characterization of the network effect in the population activity and for reproducing the structure of the recurrent functional connections in the retinal architecture. In particular, the inferred interactions are strong and positive only for nearby neurons of the same type. Inter-type connections are instead small and slightly negative. Therefore, the retinal architecture splits into weakly interacting subpopulations composed of strongly interacting neurons.
Overall, this temporal framework fixes the problems of the standard, static, inverse Ising model and accounts for the system&rsquo;s collective behavior, for stimuli with either short or long-range correlations.
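
The moment-matching inference behind the inverse Ising model can be sketched for a toy system small enough to enumerate exactly (our illustration, not the retinal-data pipeline): fields and couplings are adjusted until the model's first and second moments match the "empirical" ones.

```python
import itertools
import numpy as np

# Exact inverse Ising for 3 spins: enumerate all 2^n states, then match
# empirical moments by gradient ascent on the (concave) log-likelihood.

n = 3
states = np.array(list(itertools.product([-1, 1], repeat=n)))  # 2^n states

def moments(h, J):
    # exact means <s_i> and correlations <s_i s_j> of the Boltzmann model
    E = states @ h + 0.5 * np.einsum('si,ij,sj->s', states, J, states)
    p = np.exp(E - E.max())
    p /= p.sum()
    return p @ states, np.einsum('s,si,sj->ij', p, states, states)

# "empirical" statistics generated from a ground-truth model
h_true = np.array([0.2, -0.1, 0.3])
J_true = np.array([[0, 0.5, 0], [0.5, 0, -0.4], [0, -0.4, 0]])
m_emp, C_emp = moments(h_true, J_true)

h = np.zeros(n)
J = np.zeros((n, n))
for _ in range(20000):
    m, C = moments(h, J)
    h += 0.1 * (m_emp - m)           # moment-matching updates
    dJ = 0.1 * (C_emp - C)
    np.fill_diagonal(dJ, 0.0)        # no self-couplings
    J += dJ

print(np.abs(moments(h, J)[0] - m_emp).max())
```

For real recordings the moments come from data and the model expectations require sampling, but the fixed point is the same: the inferred (h, J) reproduce the recorded statistics.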

]]>Physical Sciences Forum doi: 10.3390/psf2022005030

Authors: Olivier Rioul

In many areas of computer science, it is of primary importance to assess the randomness of a certain variable X. Many different criteria can be used to evaluate randomness, possibly after observing some disclosed data. A &ldquo;sufficiently random&rdquo; X is often described as &ldquo;entropic&rdquo;. Indeed, Shannon&rsquo;s entropy is known to provide a resistance criterion against modeling attacks. More generally, one may consider the R&eacute;nyi &alpha;-entropy, where Shannon&rsquo;s entropy, collision entropy and min-entropy are recovered as the particular cases &alpha;=1, 2 and +&infin;, respectively. Guesswork, or guessing entropy, is also of great interest in relation to &alpha;-entropy. On the other hand, many applications rely instead on the &ldquo;statistical distance&rdquo;, also known as the &ldquo;total variation&rdquo; distance, to the uniform distribution. This criterion is particularly important because a very small distance ensures that no statistical test can effectively distinguish between the actual distribution and the uniform distribution. In this paper, we establish optimal lower and upper bounds between &alpha;-entropy and guessing entropy on one hand, and error probability and total variation distance to the uniform on the other hand. In this context, it turns out that the best known &ldquo;Pinsker inequality&rdquo; and recent &ldquo;reverse Pinsker inequalities&rdquo; are not necessarily optimal. We recover or improve previous Fano-type and Pinsker-type inequalities used in several applications.
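
The quantities related in this paper are easy to evaluate numerically; the following sketch (our example) computes the alpha-entropies for alpha = 1, 2 and infinity, and the total variation distance to the uniform distribution, for a small biased source:

```python
import numpy as np

# Renyi alpha-entropies and statistical distance to uniform for a
# slightly biased 8-outcome source.

p = np.array([0.20, 0.15, 0.15, 0.12, 0.12, 0.10, 0.08, 0.08])
n = len(p)

H1 = -np.sum(p * np.log2(p))              # Shannon entropy (alpha -> 1)
H2 = -np.log2(np.sum(p ** 2))             # collision entropy (alpha = 2)
Hinf = -np.log2(p.max())                  # min-entropy (alpha = infinity)
tv = 0.5 * np.sum(np.abs(p - 1.0 / n))    # total variation to uniform

print(H1, H2, Hinf, tv)
```

The alpha-entropy is non-increasing in alpha, so Hinf <= H2 <= H1 <= log2(n), with equality throughout only for the uniform distribution (tv = 0); the paper's bounds make the trade-off between these entropies and tv precise.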

]]>Physical Sciences Forum doi: 10.3390/psf2022005026

Authors: Hamideh Manoochehri Seyed Ahmad Motamedi Ali Mohammad-Djafari Masrour Makaremi Alireza Vafaie Sadr

Accurate determination of skeletal maturation indicators is crucial in the orthodontic process. Chronologic age is not a reliable skeletal maturation indicator; thus, physicians use bone age. In orthodontics, the treatment timing depends on Cervical Vertebral Maturation (CVM) assessment. Determination of the CVM degree remains challenging due to the limited annotated dataset, the existence of significant irrelevant areas in the image, the huge intra-class variances, and the high degree of inter-class similarities. To address this problem, researchers have started looking for external information beyond the currently available medical datasets. This work utilizes the domain knowledge of radiologists to train neural network models that can be utilized as a decision support system. We propose a novel supervised learning method with a multi-scale attention mechanism, incorporating the general diagnostic patterns of medical doctors to classify lateral X-ray images into six CVM classes. The proposed network highlights the important regions, suppresses the irrelevant parts of the image, and efficiently models long-range dependencies. Employing the attention mechanism improves both the performance and interpretability. In this work, we used additive spatial and channel attention modules. Our proposed network consists of three branches. The first branch extracts local features and creates attention maps and related masks, the second branch uses the masks to extract discriminative features for classification, and the third branch fuses local and global features. The results show that the proposed method represents more discriminative features; therefore, the image classification accuracy is greater than that of the backbone and of some attention-based state-of-the-art networks.

]]>Physical Sciences Forum doi: 10.3390/psf2022005024

Authors: Andrew Beckett

We summarise recent work on the classical result of Kirillov that any simply connected homogeneous symplectic space of a connected group G is a Hamiltonian G^-space for a one-dimensional central extension G^ of G, and is thus (by a result of Kostant) a cover of a coadjoint orbit of G^. We emphasise that existing proofs in the literature assume that G is simply connected and that this assumption can be removed by application of a theorem of Neeb. We also interpret Neeb&rsquo;s theorem as relating the integrability of one-dimensional central extensions of Lie algebras to the integrability of an associated Chevalley&ndash;Eilenberg 2-cocycle.

]]>Physical Sciences Forum doi: 10.3390/psf2022005027

Authors: Margret Westerkamp Igor V. Ovchinnikov Philipp Frank Torsten Enßlin

The inference of dynamical fields is of paramount importance in science, technology, and economics. Dynamical field inference can be based on information field theory and used to infer the evolution of fields in dynamical systems from finite data. Here, the partition function, as the central mathematical object of our investigation, invokes a Dirac delta function as well as a field-dependent functional determinant, which impede the inference. To tackle this problem, Faddeev&ndash;Popov ghosts and a Lagrange multiplier are introduced to represent the partition function by an integral over those fields. According to the supersymmetric theory of stochastics, the action associated with the partition function has a supersymmetry for those ghost and signal fields. In this context, the spontaneous breakdown of supersymmetry leads to chaotic behavior of the system. To demonstrate the impact of chaos, characterized by positive Lyapunov exponents, on the predictability of a system&rsquo;s evolution, we show for the case of idealized linear dynamics that the dynamical growth rates of the fermionic ghost fields impact the uncertainty of the field inference. Finally, by establishing perturbative solutions to the inference problem associated with an idealized nonlinear system, using a Feynman diagrammatic expansion, we show that the fermionic contributions, implementing the functional determinant, are key to obtaining the correct posterior of the system.

]]>Physical Sciences Forum doi: 10.3390/psf2022005029

Authors: François Verdeil Yannick Deville

Quantum process tomography (QPT) methods aim at identifying a given quantum process. QPT is a major quantum information processing tool, since it especially allows one to characterize the actual behavior of quantum gates, which are the building blocks of quantum computers. The present paper focuses on the estimation of a unitary process. This class is of particular interest because quantum mechanics postulates that the evolution of any closed quantum system is described by a unitary transformation. Unitary processes have significantly fewer parameters than general quantum processes (2^(2nqb) vs. 2^(4nqb)&minus;2^(2nqb) real independent parameters for nqb qubits). By assuming that the process is unitary, we develop two methods that scale better with the size of the system. In the present paper, we stay as close as possible to the standard setup of QPT: the operator has to prepare copies of different input states. The properties those states have to satisfy in order for our method to achieve QPT are very mild. Therefore, we choose to operate with copies of 2^nqb initially unknown pure input states. In order to perform QPT without knowing the input states, we perform measurements on half the copies of each state, and let the other half be transformed by the system before measuring them (each copy is only measured once). This setup has the advantage of entirely removing the issue of systematic errors (i.e., errors that are the same on all the copies of a state) because it does not require the process input to take predefined values. We develop a straightforward analytical solution that first estimates the states from the averaged measurements and then finds the unitary matrix (representing the process) consistent with those estimates by using our analytical solution to an extended version of Wahba&rsquo;s problem. This estimate may then be used as an initial point for a fine-tuning algorithm that maximizes the likelihood of the measurements.
Simulation results show the effectiveness of the proposed methods.
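
The classical (unextended) Wahba/orthogonal-Procrustes step underlying the state-matching stage can be sketched as follows (our illustration of the standard SVD solution in a noiseless setting; the paper solves an extended version of this problem):

```python
import numpy as np

# Given input-state estimates a_i and output estimates b_i = U a_i, the
# unitary minimizing sum_i ||U a_i - b_i||^2 comes from the SVD of the
# cross-correlation matrix M = sum_i b_i a_i^H (Wahba/Procrustes solution).

rng = np.random.default_rng(2)
d = 4                                    # 2 qubits -> dimension 2^2

# random ground-truth unitary from the QR decomposition of a complex matrix
Q, R = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
U_true = Q * (np.diag(R) / np.abs(np.diag(R)))

A = rng.normal(size=(d, 2 * d)) + 1j * rng.normal(size=(d, 2 * d))
A /= np.linalg.norm(A, axis=0)           # normalized "input state" estimates
B = U_true @ A                           # corresponding output estimates

M = B @ A.conj().T                       # cross-correlation matrix
W, _, Vh = np.linalg.svd(M)
U_est = W @ Vh                           # best-fit unitary

print(np.max(np.abs(U_est - U_true)))    # ~0 in this noiseless sketch
```

With noisy state estimates, U_est is no longer exact, which is where the likelihood-maximizing fine-tuning step described above becomes useful.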

]]>Physical Sciences Forum doi: 10.3390/psf2022005025

Authors: Adrian-Josue Guel-Cortez Eun-jin Kim

By combining information science and differential geometry, information geometry provides a geometric method to measure the differences in the time evolution of the statistical states in a stochastic process. Specifically, the so-called information length (the time integral of the information rate) describes the total amount of statistical changes that a time-varying probability distribution takes through time. In this work, we outline how the application of information geometry may permit us to create energetically efficient and organised behaviour artificially. Specifically, we demonstrate how nonlinear stochastic systems can be analysed by utilising the Laplace assumption to speed up the numerical computation of the information rate of stochastic dynamics. Then, we explore a modern control engineering protocol to obtain the minimum statistical variability while analysing its effects on the closed-loop system&rsquo;s stochastic thermodynamics.
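
The information length mentioned above has a simple closed form for a Gaussian with a drifting mean and fixed variance, which the following sketch (our example, not the paper's control setup) checks by direct numerical integration of the information rate:

```python
import numpy as np

# Information length L = integral of Gamma dt, with information rate
# Gamma^2 = integral dx (d_t p)^2 / p. For a Gaussian of fixed variance
# sigma^2 whose mean moves from 0 to 3, theory gives L = 3 / sigma.

sigma = 0.7
x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]
t = np.linspace(0.0, 1.0, 2001)
dt = t[1] - t[0]
mu = 3.0 * t                              # linearly drifting mean

def pdf(m):
    return np.exp(-0.5 * ((x - m) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

L = 0.0
for k in range(len(t) - 1):
    p0, p1 = pdf(mu[k]), pdf(mu[k + 1])
    dpdt = (p1 - p0) / dt                 # finite-difference time derivative
    gamma2 = np.sum(dpdt ** 2 / np.maximum(p0, 1e-300)) * dx
    L += np.sqrt(gamma2) * dt

print(L, 3.0 / sigma)                     # numerical vs analytic result
```

Counting the statistical change in these dimensionless units is what allows the minimum-variability control protocols discussed above to be compared across systems.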

]]>Physical Sciences Forum doi: 10.3390/psf2022005023

Authors: Xavier Brouty Matthieu Garcin

By using Brillouin&rsquo;s perspective on Maxwell&rsquo;s demon, we determine a new way to describe investor behaviors in financial markets. The efficient market hypothesis (EMH) in its strong form states that all information in the market, public or private, is accounted for in the stock price. By simulations in an agent-based model, we show that an informed investor using alternative data, correlated to the time series of prices of a financial asset, is able to act as a Maxwell&rsquo;s demon on financial markets. They are then able to perform statistical arbitrage, consistently with the adaptive market hypothesis (AMH). A new statistical test of market efficiency provides some insight into the impact of the demon on the market. This test determines the amount of information contained in the series, using quantities which are widespread in information theory, such as Shannon&rsquo;s entropy. As in Brillouin&rsquo;s perspective, we observe a cycle: Negentropy-&gt;Information-&gt;Negentropy. This cycle demonstrates the involvement of the investor, depicted as a Maxwell&rsquo;s demon, in the market through the knowledge of alternative data.

]]>Physical Sciences Forum doi: 10.3390/psf2022005022

Authors: Romke Bontekoe Barrie J. Stokes

In this tutorial paper, the Gull&ndash;Skilling kangaroo problem is revisited. The problem is used as an example of solving an under-determined system by variational principles, the maximum entropy principle (MEP), and information geometry. The relationship between correlation and information is demonstrated. The Kullback&ndash;Leibler divergence of two discrete probability distributions is shown to fail as a distance measure. However, an analogy with rigid-body rotations in classical mechanics is motivated, and a table of proper &ldquo;geodesic&rdquo; distances between probability distributions is presented. With this paper, the authors pay tribute to their late friend David Blower.
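The failure of the Kullback&ndash;Leibler divergence as a distance measure is easy to exhibit numerically; a minimal sketch of its asymmetry, with arbitrary example distributions:

```python
import numpy as np

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q) of discrete distributions (nats)."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return float(np.sum(p * np.log(p / q)))

p = np.array([0.6, 0.3, 0.1])        # arbitrary example distribution
u = np.array([1.0, 1.0, 1.0]) / 3.0  # uniform distribution
d_pu = kl_divergence(p, u)  # D(p || u) ~ 0.201
d_up = kl_divergence(u, p)  # D(u || p) ~ 0.241
# d_pu != d_up: the divergence is asymmetric, so it cannot be a metric
```

Asymmetry alone already disqualifies the divergence as a distance; it also fails the triangle inequality, which is what motivates proper geodesic distances.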

]]>Physical Sciences Forum doi: 10.3390/psf2022005021

Authors: Daisuke Tarama Jean-Pierre Françoise

A statistical transformation model consists of a smooth data manifold, on which a Lie group smoothly acts, together with a family of probability density functions on the data manifold parametrized by elements in the Lie group. For such a statistical transformation model, the Fisher&ndash;Rao semi-definite metric and the Amari&ndash;Chentsov cubic tensor are defined on the Lie group. If the family of probability density functions is invariant with respect to the Lie group action, the Fisher&ndash;Rao semi-definite metric and the Amari&ndash;Chentsov tensor are left-invariant, and hence we have a left-invariant structure of a statistical manifold. In the present work, the general framework of statistical transformation models is explained. Then, the left-invariant geodesic flow associated with the Fisher&ndash;Rao metric is considered for two specific families of probability density functions on the Lie group. The corresponding Euler&ndash;Poincar&eacute; and Lie&ndash;Poisson equations are explicitly found in view of geometric mechanics. Related dynamical systems over Lie groups are also mentioned. A generalization in relation to the invariance of the family of probability density functions is further studied.

]]>Physical Sciences Forum doi: 10.3390/psf2022005020

Authors: Piotr Graczyk Hideyuki Ishi Bartosz Kołodziejek

This paper is concerned with model selection within colored graphical Gaussian models when the underlying conditional dependency graph is known. We consider multivariate centered Gaussian models for the random vector (Z1,&hellip;,Zp), whose conditional structure is described by a homogeneous graph and which are invariant under the action of a permutation subgroup. We derive an analytic expression for the normalizing constant of the Diaconis&ndash;Ylvisaker conjugate prior for the precision parameter and perform Bayesian model selection in the class of graphical Gaussian models invariant under the action of a permutation subgroup. We illustrate our results with a toy example of dimension 5.

]]>Physical Sciences Forum doi: 10.3390/psf2022005019

Authors: Fábio C. C. Meneghetti Henrique K. Miyamoto Sueli I. R. Costa

A full-rank lattice in Euclidean space is a discrete set formed by all integer linear combinations of a basis. Given a probability distribution on Rn, two operations can be induced by considering the quotient of the space by such a lattice: wrapping and quantization. For a lattice &Lambda; and a fundamental domain D, which tiles Rn through &Lambda;, the wrapped distribution over the quotient is obtained by summing the density over each coset, while the quantized distribution over the lattice is defined by integrating over each translate of the fundamental domain. These operations define wrapped and quantized random variables over D and &Lambda;, respectively, which sum to the original random variable. We investigate information-theoretic properties of this decomposition, such as the entropy, mutual information and Fisher information matrix, and show that it naturally generalizes to the more abstract context of locally compact topological groups.
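In one dimension, with the lattice Lambda = Z and a Gaussian input, both operations are straightforward to sketch; the function names and the particular choices of fundamental domain below are illustrative, not the paper's:

```python
import math
import numpy as np

def quantized_pmf(mu, sigma, spacing, kmin=-50, kmax=50):
    """Quantize N(mu, sigma^2) onto the 1D lattice spacing*Z.

    The probability of lattice point k*spacing is the Gaussian mass of the
    fundamental-domain translate [k*spacing - spacing/2, k*spacing + spacing/2).
    """
    def cdf(x):
        return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))
    ks = np.arange(kmin, kmax + 1)
    s = spacing
    return ks, np.array([cdf(k * s + s / 2) - cdf(k * s - s / 2) for k in ks])

def wrapped_pdf(x, mu, sigma, spacing, nterms=50):
    """Wrap N(mu, sigma^2) onto the fundamental domain [0, spacing)
    by summing the density over the coset x + spacing*Z (truncated)."""
    total = 0.0
    for k in range(-nterms, nterms + 1):
        z = x + k * spacing
        total += math.exp(-0.5 * ((z - mu) / sigma) ** 2)
    return total / (sigma * math.sqrt(2.0 * math.pi))

ks, pmf = quantized_pmf(0.3, 1.0, spacing=1.0)
grid = np.linspace(0.0, 1.0, 200, endpoint=False)
wrapped = np.array([wrapped_pdf(x, 0.3, 1.0, 1.0) for x in grid])
```

Both outputs are proper distributions: the quantized masses sum to one over the lattice, and the wrapped density integrates to one over the fundamental domain.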

]]>Physical Sciences Forum doi: 10.3390/psf2022005017

Authors: Pierre-Yves Lagrave Frédéric Barbaresco

This paper introduces an adaptive importance sampling (AIS) scheme for the computation of group-based convolutions, a key step in the implementation of equivariant neural networks. By leveraging information geometry to define the parameter update rule for inferring the optimal sampling distribution, we show promising results for our approach, working with the two-dimensional rotation group SO(2) and von Mises distributions. Finally, we position our AIS scheme with respect to quantum algorithms for computing Monte Carlo estimations.
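A minimal sketch of the non-adaptive core of such a scheme, estimating a Haar (uniform) average over SO(2), identified with the interval [0, 2*pi), using a von Mises proposal; the parameter values are arbitrary, and the information-geometric adaptation of the proposal is omitted:

```python
import numpy as np

def haar_mean_is(f, mu, kappa, n=100_000, seed=0):
    """Importance-sampling estimate of the uniform (Haar) average of f
    on the circle, using a von Mises(mu, kappa) proposal distribution."""
    rng = np.random.default_rng(seed)
    theta = rng.vonmises(mu, kappa, size=n)  # proposal samples in (-pi, pi]
    # von Mises density: exp(kappa*cos(theta - mu)) / (2*pi*I0(kappa))
    q = np.exp(kappa * np.cos(theta - mu)) / (2.0 * np.pi * np.i0(kappa))
    p = 1.0 / (2.0 * np.pi)                  # uniform Haar density
    return float(np.mean(f(theta) * p / q))  # importance-weighted mean

# The Haar average of cos^2 over the circle is exactly 1/2
est = haar_mean_is(lambda t: np.cos(t) ** 2, mu=0.0, kappa=2.0)
```

An adaptive scheme would re-estimate (mu, kappa) between batches to reduce the variance of the importance weights, which is where the information-geometric update rule enters.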

]]>Physical Sciences Forum doi: 10.3390/psf2022005018

Authors: Elham Taghizadeh Ali Mohammad-Djafari

In this paper, we consider the SEIR (Susceptible-Exposed-Infectious-Removed) model for studying COVID-19. The main contributions of this paper are: (i) a detailed explanation of the SEIR model and the significance of its parameters; (ii) calibration and estimation of the parameters of the model using the observed data, via nonlinear least squares (NLS) optimization and a Bayesian estimation method; (iii) once the parameters are estimated, use of the model for the prediction of the spread of the virus and computation of the probable numbers of infections and deaths; (iv) demonstration of the performance of the proposed method on simulated and real data; and (v) since the fixed-parameter model could not give satisfactory results on real data, the proposal of a time-dependent parameter model, which is then implemented and applied to real data.
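A minimal sketch of the forward SEIR dynamics, using plain Euler integration and illustrative parameter values rather than the paper's calibrated estimates:

```python
import numpy as np

def seir(beta, sigma, gamma, s0, e0, i0, r0, days, dt=0.1):
    """Integrate the SEIR compartmental model with forward Euler.

    dS/dt = -beta*S*I,  dE/dt = beta*S*I - sigma*E,
    dI/dt = sigma*E - gamma*I,  dR/dt = gamma*I,
    with compartments normalised so that S + E + I + R = 1.
    """
    n = int(days / dt)
    s, e, i, r = s0, e0, i0, r0
    traj = [(s, e, i, r)]
    for _ in range(n):
        ds = -beta * s * i
        de = beta * s * i - sigma * e
        di = sigma * e - gamma * i
        dr = gamma * i
        s, e, i, r = s + dt * ds, e + dt * de, i + dt * di, r + dt * dr
        traj.append((s, e, i, r))
    return np.array(traj)

# Illustrative run: basic reproduction number beta/gamma = 5
traj = seir(beta=0.5, sigma=0.2, gamma=0.1,
            s0=0.99, e0=0.01, i0=0.0, r0=0.0, days=200)
peak_infected = traj[:, 2].max()
```

Calibration then amounts to choosing (beta, sigma, gamma) so that the trajectory matches observed case counts, by NLS or Bayesian estimation; a time-dependent model lets beta vary over the integration instead of staying fixed.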

]]>