Computation, Volume 12, Issue 2 (February 2024) – 19 articles

Cover Story: The construction of accurate surrogates operating in high-dimensional spaces requires advanced sampling strategies that enable active learning, ensuring high accuracy while reducing the number of samples involved in the design of experiments. Here, the Fisher information matrix is combined with the sparse proper generalized decomposition to define a new active learning informativeness criterion in high dimensions.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • Papers are published in both HTML and PDF formats, with PDF as the official version. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
11 pages, 1580 KiB  
Article
The Mechanism of Resonant Amplification of One-Dimensional Detonation Propagating in a Non-Uniform Mixture
by Alexander Lopato and Pavel Utkin
Computation 2024, 12(2), 37; https://doi.org/10.3390/computation12020037 - 17 Feb 2024
Viewed by 1131
Abstract
The propagation of detonation waves (i.e., supersonic combustion waves) in non-uniform gaseous mixtures has become a matter of interest over the past several years due to the development of rotating detonation engines. It was shown in a number of recent theoretical studies of one-dimensional pulsating detonation that perturbation of the parameters in front of the detonation wave can lead to a resonant amplification of intrinsic pulsations for a certain range of perturbation wavelengths. This work is dedicated to the clarification of the mechanism of this effect. One-dimensional reactive Euler equations with single-step Arrhenius kinetics were solved. Detonation propagation in a gas with sine waves in density was simulated in a shock-attached frame of reference. We carried out a series of simulations, varying the wavelength of the disturbances. We obtained a non-linear dependence of the amplitude of these pulsations on the wavelength of disturbances, with resonant amplification for a certain range of wavelengths. The gain in velocity was about 25% of the Chapman–Jouguet velocity of the stable detonation wave. The effect is explained using characteristic analysis in the x–t diagram. For the resonant case, we correlated the pulsation period with the time it takes for the C+ and C− characteristics to travel through the effective reaction zone. A similar pulsation mechanism is realized when a detonation wave propagates in a homogeneous medium. Full article
(This article belongs to the Special Issue Recent Advances in Numerical Simulation of Compressible Flows)
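As a concrete illustration of the ingredients named in the abstract, here is a minimal Python sketch of a single-step Arrhenius reaction rate and a sinusoidal upstream density field; all function names and parameter values are illustrative assumptions, not the authors' code.

```python
import numpy as np

def arrhenius_rate(rho, T, lam, k=10.0, Ea=25.0):
    """Single-step Arrhenius reaction rate for a reaction progress
    variable lam (0 = unburnt, 1 = fully burnt); nondimensional units."""
    return k * rho * (1.0 - lam) * np.exp(-Ea / T)

def density_perturbation(x, rho0=1.0, amplitude=0.1, wavelength=10.0):
    """Sinusoidal density field ahead of the wave, as in the simulated
    non-uniform mixture (amplitude and wavelength are illustrative)."""
    return rho0 * (1.0 + amplitude * np.sin(2.0 * np.pi * x / wavelength))

# Example: reaction source term evaluated in a perturbed pocket of gas
x = np.linspace(0.0, 50.0, 6)
rho = density_perturbation(x)
print(arrhenius_rate(rho, T=2.0, lam=0.5))
```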
13 pages, 864 KiB  
Article
Injury Patterns and Impact on Performance in the NBA League Using Sports Analytics
by Vangelis Sarlis, George Papageorgiou and Christos Tjortjis
Computation 2024, 12(2), 36; https://doi.org/10.3390/computation12020036 - 16 Feb 2024
Cited by 1 | Viewed by 2047
Abstract
This research paper examines Sports Analytics, focusing on injury patterns in the National Basketball Association (NBA) and their impact on players’ performance. It employs a unique dataset to identify common NBA injuries, determine the most affected anatomical areas, and analyze how these injuries influence players’ post-recovery performance. This study’s novelty lies in its integrative approach that combines injury data with performance metrics and salary data, providing new insights into the relationship between injuries and economic and on-court performance. It investigates the periodicity and seasonality of injuries, seeking patterns related to time and external factors. Additionally, it examines the effect of specific injuries on players’ per-match analytics and performance, offering perspectives on the implications of injury rehabilitation for player performance. This paper contributes significantly to sports analytics, assisting coaches, sports medicine professionals, and team management in developing injury prevention strategies, optimizing player rotations, and creating targeted rehabilitation plans. Its findings illuminate the interplay between injuries, salaries, and performance in the NBA, aiming to enhance player welfare and the league’s overall competitiveness. With a comprehensive and sophisticated analysis, this research offers unprecedented insights into the dynamics of injuries and their long-term effects on athletes. Full article
(This article belongs to the Special Issue 10th Anniversary of Computation—Computational Engineering)
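A hedged sketch of the kind of seasonality and injury-frequency queries the abstract describes, using a hypothetical injury log whose column names are invented for illustration, not the paper's actual schema.

```python
import pandas as pd

# Hypothetical injury log; columns are illustrative stand-ins.
injuries = pd.DataFrame({
    "player": ["A", "B", "C", "A"],
    "date": pd.to_datetime(["2019-11-02", "2020-01-15", "2020-01-20", "2021-03-05"]),
    "body_part": ["knee", "ankle", "knee", "hamstring"],
})

# Periodicity/seasonality: count injuries per calendar month across seasons.
monthly = injuries.groupby(injuries["date"].dt.month).size()
print(monthly)

# Most affected anatomical areas.
print(injuries["body_part"].value_counts())
```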
29 pages, 10558 KiB  
Article
Topology Optimization and Efficiency Evaluation of Short-Fiber-Reinforced Composite Structures Considering Anisotropy
by Evgenii Kurkin, Oscar Ulises Espinosa Barcenas, Evgenii Kishov and Oleg Lukyanov
Computation 2024, 12(2), 35; https://doi.org/10.3390/computation12020035 - 12 Feb 2024
Viewed by 1648
Abstract
The current study aims to develop a methodology for obtaining topology-optimal structures made of short-fiber-reinforced polymers. Each iteration of topology optimization involves two consecutive steps: the first is a simulation of the injection molding process for obtaining the fiber orientation tensor, and the second is a structural analysis with anisotropic material properties. Accounting for the molding process during the internal iterations of topology optimization makes it possible to enhance the weight efficiency of structures—a crucial aspect, especially in aerospace. Anisotropy is considered through the fiber orientation tensor, which is modeled by solving the plastic molding equations for non-Newtonian fluids and then introduced as a variable in the stiffness matrix during the structural analysis. Structural analysis using a linear anisotropic material model was employed within the topology optimization. For verification, a non-linear elasto-plastic material model was used based on an exponential-and-linear hardening law. The evaluation of weight efficiency in structures composed of short-fiber-reinforced composite materials using a dimensionless criterion is addressed. Experimental verification was performed to confirm the validity of the developed methodology. The evidence illustrates that considering anisotropy leads to stiffer structures, and structural elements should be oriented in the direction of maximal stiffness. The load-carrying factor is expressed in terms of failure criteria. The presented multidisciplinary methodology can be used to improve the quality of the design of structures made of short-fiber-reinforced composites (SFRC), where high stiffness, high strength, and minimum mass are the primary required structural characteristics. Full article
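The two-step iteration described above can be sketched schematically. The following Python skeleton uses stub functions in place of the real molding and structural solvers, so every quantity here is an illustrative placeholder rather than the paper's method.

```python
import numpy as np

def molding_simulation(density):
    """Stub for step 1: injection-molding simulation returning a fiber
    orientation tensor per element (here a fixed, x-aligned tensor; the
    paper obtains it by solving non-Newtonian molding equations)."""
    a = np.zeros((density.size, 3, 3))
    a[:, 0, 0], a[:, 1, 1], a[:, 2, 2] = 0.8, 0.15, 0.05
    return a

def compliance(density, orientation, penal=3.0):
    """Stub for step 2: structural analysis with anisotropic stiffness;
    SIMP-style penalization couples density to the orientation tensor."""
    E = density**penal * orientation[:, 0, 0]       # toy scalar stiffness
    return float(np.sum(1.0 / np.maximum(E, 1e-9)))

density = np.full(100, 0.5)                          # element design variables
for it in range(10):
    a = molding_simulation(density)                  # step 1 of each iteration
    c = compliance(density, a)                       # step 2 of each iteration
    density = np.clip(density * 1.02, 0.05, 1.0)     # placeholder design update
```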
15 pages, 1448 KiB  
Article
Modeling and Simulating an Epidemic in Two Dimensions with an Application Regarding COVID-19
by Khalaf M. Alanazi
Computation 2024, 12(2), 34; https://doi.org/10.3390/computation12020034 - 12 Feb 2024
Viewed by 1251
Abstract
We derive a reaction–diffusion model with time-delayed nonlocal effects to study an epidemic’s spatial spread numerically. The model describes infected individuals in the latent period using a structured model with diffusion. The epidemic model assumes that infectious individuals are subject to containment measures. To simulate the model in two-dimensional space, we use the continuous Runge–Kutta method of the fourth order and the discrete Runge–Kutta method of the third order with six stages. The numerical results admit the existence of traveling wave solutions for the proposed model. We use the COVID-19 epidemic to conduct numerical experiments and investigate the minimal speed of spread of the traveling wave front. The minimal spreading speeds of COVID-19 are found and discussed. Also, we assess the power of containment measures to contain the epidemic. The results depict a clear drop in the spreading speed of the traveling wave front after applying containment measures to at-risk populations. Full article
(This article belongs to the Special Issue Computational Approaches to Solving Differential Equations)
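For orientation, a simplified Python sketch of the numerical machinery: an explicit fourth-order Runge–Kutta step applied to a 2D reaction–diffusion grid. This is a plain Fisher–KPP model; the paper's time-delayed nonlocal terms and containment measures are omitted, and all parameters are assumptions.

```python
import numpy as np

def laplacian(u, h):
    """Five-point Laplacian with zero-flux boundaries via edge padding."""
    up = np.pad(u, 1, mode="edge")
    return (up[2:, 1:-1] + up[:-2, 1:-1] + up[1:-1, 2:] + up[1:-1, :-2] - 4*u) / h**2

def rhs(u, D=1.0, r=1.0, h=0.5):
    """Simplified Fisher-KPP right-hand side: diffusion plus logistic growth."""
    return D * laplacian(u, h) + r * u * (1.0 - u)

def rk4_step(u, dt):
    k1 = rhs(u)
    k2 = rhs(u + 0.5*dt*k1)
    k3 = rhs(u + 0.5*dt*k2)
    k4 = rhs(u + dt*k3)
    return u + dt*(k1 + 2*k2 + 2*k3 + k4)/6.0

u = np.zeros((64, 64)); u[28:36, 28:36] = 1.0   # initial infected patch
for _ in range(200):
    u = rk4_step(u, dt=0.01)
# a traveling front spreads outward from the initial patch
```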
14 pages, 2761 KiB  
Article
A 16 × 16 Patch-Based Deep Learning Model for the Early Prognosis of Monkeypox from Skin Color Images
by Muhammad Asad Arshed, Hafiz Abdul Rehman, Saeed Ahmed, Christine Dewi and Henoch Juli Christanto
Computation 2024, 12(2), 33; https://doi.org/10.3390/computation12020033 - 10 Feb 2024
Cited by 1 | Viewed by 1406
Abstract
The DNA virus responsible for monkeypox, transmitted from animals to humans, exhibits two distinct genetic lineages in central and eastern Africa. Beyond the zoonotic transmission involving direct contact with the infected animals’ bodily fluids and blood, the spread of monkeypox can also occur through skin lesions and respiratory secretions among humans. Both monkeypox and chickenpox involve skin lesions and can be transmitted through respiratory secretions, but they are caused by different viruses: monkeypox is caused by an orthopoxvirus, while chickenpox is caused by the varicella-zoster virus. In this study, the utilization of a patch-based vision transformer (ViT) model for the identification of monkeypox and chickenpox disease from human skin color images marks a significant advancement in medical diagnostics. Employing a transfer learning approach, the research investigates the ViT model’s capability to discern subtle patterns indicative of monkeypox and chickenpox. The dataset was enriched through carefully selected image augmentation techniques, enhancing the model’s ability to generalize across diverse scenarios. During the evaluation phase, the patch-based ViT model demonstrated substantial proficiency, achieving an accuracy, precision, recall, and F1 score of 93%. This positive outcome underscores the practicality of employing sophisticated deep learning architectures, specifically vision transformers, in the realm of medical image analysis. Through the integration of transfer learning and image augmentation, not only is the model’s responsiveness to monkeypox- and chickenpox-related features enhanced, but concerns regarding data scarcity are also effectively addressed. The model outperformed the state-of-the-art studies and CNN-based pre-trained models in terms of accuracy. Full article
(This article belongs to the Special Issue Deep Learning Applications in Medical Imaging)
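The 16 × 16 patch front end of a standard ViT can be expressed compactly. This PyTorch sketch shows the usual Conv2d-based patch embedding; the dimensions are illustrative defaults, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class PatchEmbed(nn.Module):
    """Split an image into non-overlapping 16x16 patches and project each
    patch to an embedding vector, as in a standard ViT front end."""
    def __init__(self, img_size=224, patch=16, in_ch=3, dim=768):
        super().__init__()
        self.proj = nn.Conv2d(in_ch, dim, kernel_size=patch, stride=patch)
        self.n_patches = (img_size // patch) ** 2

    def forward(self, x):                    # x: (B, 3, 224, 224)
        x = self.proj(x)                     # (B, dim, 14, 14)
        return x.flatten(2).transpose(1, 2)  # (B, 196, dim)

emb = PatchEmbed()
print(emb(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 196, 768])
```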
17 pages, 709 KiB  
Article
Accelerating Multiple Sequence Alignments Using Parallel Computing
by Qanita Bani Baker, Ruba A. Al-Hussien and Mahmoud Al-Ayyoub
Computation 2024, 12(2), 32; https://doi.org/10.3390/computation12020032 - 9 Feb 2024
Viewed by 1525
Abstract
Multiple sequence alignment (MSA) stands as a critical tool for understanding the evolutionary and functional relationships among biological sequences. Obtaining an exact solution for MSA, termed exact-MSA, is a significant challenge due to the combinatorial nature of the problem, and the dynamic programming technique for solving it is recognized as highly computationally complex. To cope with these computational demands, parallel computing offers the potential for significant speedup. In this study, we investigated the utilization of parallelization to solve the exact-MSA using three novel approaches, in which multi-threading techniques are used to improve the performance of the dynamic programming algorithms. We developed and employed three parallel approaches, named diagonal traversing, blocking, and slicing, to improve MSA performance. The proposed methods accelerated the exact-MSA algorithm by around 4×. These approaches could serve as foundational elements, offering potential integration with existing techniques for comprehensive MSA enhancement. Full article
(This article belongs to the Special Issue 10th Anniversary of Computation—Computational Engineering)
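The diagonal-traversing idea rests on the fact that all cells on one anti-diagonal of the dynamic programming matrix are mutually independent. This pairwise-alignment sketch (a simplification of the multiple-alignment case) fills the matrix by anti-diagonals and marks the loop that could be dispatched to threads; scores and sequences are illustrative.

```python
import numpy as np

def nw_antidiagonal(a, b, gap=-1, match=1, mismatch=-1):
    """Global-alignment DP filled by anti-diagonals: all cells on one
    anti-diagonal depend only on earlier diagonals, so they can be
    computed concurrently (this serial sketch marks the parallelizable loop)."""
    n, m = len(a), len(b)
    H = np.zeros((n + 1, m + 1), dtype=int)
    H[:, 0] = gap * np.arange(n + 1)
    H[0, :] = gap * np.arange(m + 1)
    for d in range(2, n + m + 1):              # one anti-diagonal at a time
        lo, hi = max(1, d - m), min(n, d - 1)
        for i in range(lo, hi + 1):            # parallelizable across i
            j = d - i
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i, j] = max(H[i-1, j-1] + s, H[i-1, j] + gap, H[i, j-1] + gap)
    return H[n, m]

print(nw_antidiagonal("GATTACA", "GCATGCU"))
```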
25 pages, 6445 KiB  
Article
The MLDAR Model: Machine Learning-Based Denoising of Structural Response Signals Generated by Ambient Vibration
by Spyros Damikoukas and Nikos D. Lagaros
Computation 2024, 12(2), 31; https://doi.org/10.3390/computation12020031 - 9 Feb 2024
Viewed by 1298
Abstract
Engineers have consistently prioritized the maintenance of structural serviceability and safety. Recent strides in design codes, computational tools, and Structural Health Monitoring (SHM) have sought to address these concerns. At the same time, the burgeoning application of machine learning (ML) techniques across diverse domains has been noteworthy. This research proposes the combination of ML techniques with SHM to bridge the gap between high-cost and affordable measurement devices. A significant challenge associated with low-cost instruments lies in the heightened noise introduced into recorded data, which particularly obscures structural responses in ambient vibration (AV) measurements. Consequently, the signal obscured within the noise poses challenges for engineers in identifying the eigenfrequencies of structures. This article concentrates on eliminating additive noise, particularly electronic noise stemming from sensor circuitry and components, in AV measurements. The proposed MLDAR (Machine Learning-based Denoising of Ambient Response) model employs a neural network architecture featuring a denoising autoencoder with convolutional and upsampling layers. The MLDAR model undergoes training using AV response signals from various Single-Degree-of-Freedom (SDOF) oscillators. These SDOFs span the 1–10 Hz frequency band, encompassing low, medium, and high eigenfrequencies, and the accuracy with which these are recovered forms an integral part of the model’s evaluation. The results are promising: AV measurements in image format, after being submitted to the trained model, become free of additive noise, and with the aid of upscaling, the target eigenfrequencies can be derived without altering or deforming them. Comparisons using both qualitative and quantitative measures, such as the mean magnitude-squared coherence, the mean phase difference, and the signal-to-noise ratio (SNR), showed great performance. Full article
(This article belongs to the Special Issue Computational Methods in Structural Engineering)
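A minimal PyTorch sketch of a denoising autoencoder with convolutional and upsampling layers, in the spirit of the architecture described above; layer counts and sizes are assumptions, not the published network.

```python
import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    """Convolutional denoising autoencoder: noisy AV response signals
    rendered as single-channel images in, denoised images out."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2), nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2), nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = DenoisingAE()
noisy = torch.randn(8, 1, 64, 64)   # batch of noisy spectrogram-like images
clean_hat = model(noisy)            # would be trained with MSE against clean targets
print(clean_hat.shape)              # torch.Size([8, 1, 64, 64])
```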
19 pages, 866 KiB  
Article
Geometric Loci and ChatGPT: Caveat Emptor!
by Francisco Botana and Tomas Recio
Computation 2024, 12(2), 30; https://doi.org/10.3390/computation12020030 - 7 Feb 2024
Viewed by 1278
Abstract
We compare the performance of two systems, ChatGPT 3.5 and GeoGebra 5, on a restricted, but quite relevant, benchmark from the realm of classical geometry: the determination of geometric loci, focusing, in particular, on the computation of envelopes of families of plane curves. In order to study the loci-calculation abilities of ChatGPT, we begin by entering an informal description of a geometric construction involving a locus or an envelope, and then we ask ChatGPT to compute its equation. The chatbot fails in most situations, showing that it is not mature enough to deal with the subject. The same constructions are then approached through the automated reasoning tools implemented in the dynamic geometry program GeoGebra Discovery, which successfully resolves most of them. Furthermore, although ChatGPT is able to write general computer code, it cannot currently output that of GeoGebra; we therefore describe a simple method for ChatGPT to generate GeoGebra constructions. Finally, in cases where GeoGebra fails or gives an incorrect solution, we point to the need for improved computer algebra algorithms to solve the loci/envelope constructions. Besides exhibiting the currently problematic performance of the programs involved in this geometric context, our comparison aims to show the relevance and benefits of analyzing the interaction between them. Full article
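The envelope computation that such tools automate follows the classical recipe: eliminate the family parameter t from F = 0 and ∂F/∂t = 0. A small SymPy example with a family of unit circles, whose envelope is the pair of lines y = ±1 (the example itself is ours, not one of the paper's benchmarks):

```python
import sympy as sp

x, y, t = sp.symbols("x y t")

# Family of unit circles centered at (t, 0); the envelope is obtained by
# eliminating t from F = 0 and dF/dt = 0, here via the resultant.
F = (x - t)**2 + y**2 - 1
envelope = sp.resultant(F, sp.diff(F, t), t)
print(sp.factor(envelope))   # -> 4*(y - 1)*(y + 1), i.e. y = ±1
```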
23 pages, 7621 KiB  
Article
Accurate Liquid Level Measurement with Minimal Error: A Chaotic Observer Approach
by Vighnesh Shenoy, Prathvi Shenoy and Santhosh Krishnan Venkata
Computation 2024, 12(2), 29; https://doi.org/10.3390/computation12020029 - 6 Feb 2024
Viewed by 1186
Abstract
This paper delves into precisely measuring liquid levels using a specific methodology with diverse real-world applications, such as process optimization, quality control, and fault detection and diagnosis. It demonstrates the process of liquid level measurement by employing a chaotic observer, which senses multiple variables within a system. A three-dimensional computational fluid dynamics (CFD) model is meticulously created using ANSYS to comprehensively explore the laminar flow characteristics of liquids. The methodology integrates the system identification technique to formulate a third-order state–space model that characterizes the system. Based on this mathematical model, we develop estimators inspired by the Lorenz and Rössler systems to gauge the liquid level under specified liquid temperature, density, inlet velocity, and sensor placement conditions. The estimated results are compared with those of an artificial neural network (ANN) model. These ANN models learn and adapt to the patterns and features in data and capture non-linear relationships between input and output variables. The accuracy and error minimization of the developed model are confirmed through a thorough validation process. Experimental setups are employed to ensure the reliability and precision of the estimation results, thereby underscoring the robustness of our liquid-level measurement methodology. In summary, this study helps to estimate unmeasured states using the available measurements, which is essential for understanding and controlling the behavior of a system. It helps improve the performance and robustness of control systems, enhance fault detection capabilities, and contribute to dynamic systems’ overall efficiency and reliability. Full article
29 pages, 6358 KiB  
Article
Investigation of the Misinformation about COVID-19 on YouTube Using Topic Modeling, Sentiment Analysis, and Language Analysis
by Nirmalya Thakur, Shuqi Cui, Victoria Knieling, Karam Khanna and Mingchen Shao
Computation 2024, 12(2), 28; https://doi.org/10.3390/computation12020028 - 6 Feb 2024
Viewed by 1541
Abstract
The work presented in this paper makes multiple scientific contributions with a specific focus on the analysis of misinformation about COVID-19 on YouTube. First, the results of topic modeling performed on the video descriptions of YouTube videos containing misinformation about COVID-19 revealed four distinct themes or focus areas—Promotion and Outreach Efforts, Treatment for COVID-19, Conspiracy Theories Regarding COVID-19, and COVID-19 and Politics. Second, the results of topic-specific sentiment analysis revealed the sentiment associated with each of these themes. For the videos belonging to the theme of Promotion and Outreach Efforts, 45.8% were neutral, 39.8% were positive, and 14.4% were negative. For the videos belonging to the theme of Treatment for COVID-19, 38.113% were positive, 31.343% were neutral, and 30.544% were negative. For the videos belonging to the theme of Conspiracy Theories Regarding COVID-19, 46.9% were positive, 31.0% were neutral, and 22.1% were negative. For the videos belonging to the theme of COVID-19 and Politics, 35.70% were positive, 32.86% were negative, and 31.44% were neutral. Third, topic-specific language analysis was performed to detect the various languages in which the video descriptions for each topic were published on YouTube. This analysis revealed multiple novel insights. For instance, for all the themes, English and Spanish were the most widely used and second most widely used languages, respectively. Fourth, the patterns of sharing these videos on other social media channels, such as Facebook and Twitter, were also investigated. The results revealed that videos containing video descriptions in English were shared the highest number of times on Facebook and Twitter. Finally, correlation analysis was performed by taking into account multiple characteristics of these videos. The results revealed that the correlation between the length of the video title and the number of tweets and the correlation between the length of the video title and the number of Facebook posts were statistically significant. Full article
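A hedged sketch of the topic-modeling step on toy stand-in descriptions, using scikit-learn's LDA with four components to mirror the four themes reported above; the documents and preprocessing here are invented for illustration.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy stand-ins for video descriptions; four topics echo the four themes.
docs = [
    "miracle cure treats covid fast",
    "government conspiracy hides covid truth",
    "election politics covid policy debate",
    "subscribe share support our covid channel",
]
X = CountVectorizer(stop_words="english").fit_transform(docs)
lda = LatentDirichletAllocation(n_components=4, random_state=0).fit(X)
print(lda.transform(X).round(2))   # per-document topic mixtures
```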
21 pages, 8030 KiB  
Article
Numerical Modeling and Analysis of Transient and Three-Dimensional Heat Transfer in 3D Printing via Fused-Deposition Modeling (FDM)
by Büryan Apaçoğlu-Turan, Kadir Kırkköprü and Murat Çakan
Computation 2024, 12(2), 27; https://doi.org/10.3390/computation12020027 - 5 Feb 2024
Cited by 1 | Viewed by 1447
Abstract
Fused-Deposition Modeling (FDM) is a commonly used 3D printing method for rapid prototyping and the fabrication of plastic components. The history of temperature variation during the FDM process plays a crucial role in the degree of bonding between layers. This study presents research on the thermal analysis of the 3D printing process using a developed simulation code. The code employs numerical discretization methods with an implicit scheme and an effective heat transfer coefficient for cooling. The computational model is validated by comparing the results with analytical solutions, demonstrating an agreement of more than 99%. The code is then utilized to perform thermal analyses for the 3D printing process. Interlayer and intralayer reheating effects, sensitivity to printing parameters, and realistic printing patterns are investigated. It is shown that concentric and zigzag paths yield similar peaks at different time intervals. Nodal temperatures can fall below the glass transition temperature (Tg) during the printing process, especially at the outer nodes of the domain and under conditions where the cooling period is longer and the printed volume per unit time is smaller. The article suggests future work to calculate welding time at different conditions and locations for the estimation of the degree of bonding. Full article
(This article belongs to the Special Issue 10th Anniversary of Computation—Computational Engineering)
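A simplified Python sketch of an implicit (backward-Euler) discretization with an effective heat transfer coefficient, reduced to one dimension; the geometry, material constants, and one-sided zero-flux boundary treatment are illustrative assumptions, not the paper's 3D model.

```python
import numpy as np

def implicit_heat_step(T, dt, dx, alpha, h_eff=0.0, T_amb=25.0):
    """One backward-Euler step of the 1D heat equation with a lumped
    convective loss term h_eff*(T - T_amb)."""
    n = T.size
    r = alpha * dt / dx**2
    A = np.zeros((n, n))
    np.fill_diagonal(A, 1 + 2*r + h_eff*dt)
    A[np.arange(n-1), np.arange(1, n)] = -r      # upper diagonal
    A[np.arange(1, n), np.arange(n-1)] = -r      # lower diagonal
    A[0, 0] = A[-1, -1] = 1 + r + h_eff*dt       # zero-flux ends (one-sided)
    return np.linalg.solve(A, T + h_eff*dt*T_amb)

T = np.full(50, 230.0)        # freshly deposited filament temperature (°C)
for _ in range(100):
    T = implicit_heat_step(T, dt=0.01, dx=0.001, alpha=1e-7, h_eff=0.5)
print(T.min(), T.max())       # cooling toward ambient; compare against Tg
```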
13 pages, 317 KiB  
Article
Mathematical Modeling of Cell Growth via Inverse Problem and Computational Approach
by Ivanna Andrusyak, Oksana Brodyak, Petro Pukach and Myroslava Vovk
Computation 2024, 12(2), 26; https://doi.org/10.3390/computation12020026 - 3 Feb 2024
Viewed by 1422
Abstract
A simple cell population growth model is proposed, where cells are assumed to have a physiological structure (e.g., a model describing cancer cell maturation, where cells are structured by maturation stage, size, or mass). The main question is whether we can guarantee, using the death rate as a control mechanism, that the total number of cells or the total cell biomass has prescribed dynamics; this may be applied to modeling the effect of chemotherapeutic agents on malignant cells. Such models are usually described by partial differential equations (PDEs). In our paper, the population dynamics are modeled by an inverse problem for a PDE. The main idea is to reduce this model to a simplified integral equation that can be more easily studied by various analytical and numerical methods. Our results were obtained using the method of characteristics. Full article
(This article belongs to the Special Issue 10th Anniversary of Computation—Computational Biology)
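An illustrative LaTeX statement of the class of models involved, with the death rate as the control; the structuring variable x and growth rate g are assumptions for exposition, not the paper's exact model.

```latex
% Size-structured transport model with the death rate \mu(t) as control:
\partial_t n(x,t) + \partial_x \bigl( g(x)\, n(x,t) \bigr) = -\mu(t)\, n(x,t) .
```

Along the characteristics dx/dt = g(x), the PDE collapses to an ODE, and prescribing the dynamics of the total population (or biomass) then leads to the simplified integral equation for the unknown control mentioned in the abstract.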
15 pages, 1266 KiB  
Article
Two Iterative Methods for Sizing Pipe Diameters in Gas Distribution Networks with Loops
by Dejan Brkić
Computation 2024, 12(2), 25; https://doi.org/10.3390/computation12020025 - 1 Feb 2024
Viewed by 1500
Abstract
Closed-loop pipe systems allow the possibility of the flow of gas from both directions across each route, ensuring supply continuity in the event of a failure at one point, but their main shortcoming is in the necessity to model them using iterative methods. Two iterative methods of determining the optimal pipe diameter in a gas distribution network with closed loops are described in this paper, offering the advantage of maintaining the gas velocity within specified technical limits, even during peak demand. They are based on the following: (1) a modified Hardy Cross method with the correction of the diameter in each iteration and (2) the node-loop method, which provides a new diameter directly in each iteration. The calculation of the optimal pipe diameter in such gas distribution networks relies on ensuring mass continuity at nodes, following the first Kirchhoff law, and concluding when the pressure drops in all the closed paths are algebraically balanced, adhering to the second Kirchhoff law for energy equilibrium. The presented optimisation is based on principles developed by Hardy Cross in the 1930s for the moment distribution analysis of statically indeterminate structures. The results are for steady-state conditions and for the highest possible estimated demand of gas, while the distributed gas is treated as an incompressible fluid due to the relatively small drop in pressure in a typical network of pipes. There is no unique solution; instead, an infinite number of potential outcomes exist, alongside infinite combinations of pipe diameters for a given fixed flow pattern that can satisfy the first and second Kirchhoff laws in the given topology of the particular network at hand. Full article
(This article belongs to the Special Issue 10th Anniversary of Computation—Computational Engineering)
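For reference, the classic Hardy Cross loop-correction step the method builds on, sketched in Python for a single loop; note that the paper's first variant corrects pipe diameters rather than flows, which this toy example does not do, and the resistances and initial flows are assumptions.

```python
def hardy_cross_correction(flows, r, n=2.0):
    """Classic Hardy Cross loop correction dQ for one closed loop, with
    head loss per pipe h = r*Q*|Q|**(n-1), signed by flow direction."""
    num = sum(ri * q * abs(q)**(n - 1) for ri, q in zip(r, flows))
    den = sum(n * ri * abs(q)**(n - 1) for ri, q in zip(r, flows))
    return -num / den

# One loop with three pipes; assumed resistances and initial flow guesses.
r = [100.0, 200.0, 150.0]
Q = [0.03, -0.02, 0.01]            # signs follow a chosen loop orientation
for _ in range(20):
    dQ = hardy_cross_correction(Q, r)
    Q = [q + dQ for q in Q]
print([round(q, 5) for q in Q])    # head losses around the loop now balance
```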
17 pages, 5047 KiB  
Article
Data Augmentation for Regression Machine Learning Problems in High Dimensions
by Clara Guilhaumon, Nicolas Hascoët, Francisco Chinesta, Marc Lavarde and Fatima Daim
Computation 2024, 12(2), 24; https://doi.org/10.3390/computation12020024 - 1 Feb 2024
Viewed by 1347
Abstract
Machine learning approaches are currently used to understand or model complex physical systems. In general, a substantial number of samples must be collected to create a model with reliable results. However, collecting large amounts of data is often time-consuming or expensive. Moreover, the problems of industrial interest tend to be increasingly complex and depend on a high number of parameters. High-dimensional problems intrinsically involve the need for large amounts of data through the curse of dimensionality. That is why new approaches based on smart sampling techniques, such as active learning methods, have been investigated to minimize the number of samples needed to train the model. Here, we propose a technique based on a combination of the Fisher information matrix and sparse proper generalized decomposition that enables the definition of a new active learning informativeness criterion in high dimensions. We provide examples proving the performance of this technique on a theoretical 5D polynomial function and on an industrial crash simulation application. The results prove that the proposed strategy outperforms the usual ones. Full article
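A hedged sketch of an FIM-driven active-learning pick: for a linear-in-parameters surrogate, the Fisher information matrix is a sum of feature outer products, and a D-optimal criterion selects the candidate that most increases its determinant. The paper uses a sparse PGD surrogate, not the polynomial features assumed here.

```python
import numpy as np

def features(x):
    """Feature map of a linear-in-parameters surrogate (illustrative;
    the paper uses a sparse PGD separated representation instead)."""
    return np.array([1.0, x, x**2, x**3])

def next_sample(train_x, candidates):
    """D-optimal-style pick: choose the candidate that most increases the
    determinant of the Fisher information matrix of the current design."""
    F = sum(np.outer(features(x), features(x)) for x in train_x)
    gains = [np.linalg.det(F + np.outer(features(c), features(c)))
             for c in candidates]
    return candidates[int(np.argmax(gains))]

train_x = [0.0, 0.5, 1.0]
candidates = np.linspace(-1.0, 2.0, 61)
print(next_sample(train_x, candidates))   # picks a point far from existing data
```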
17 pages, 381 KiB  
Article
Exploring Controlled Passive Particle Motion Driven by Point Vortices on a Sphere
by Carlos Balsa, M. Victoria Otero-Espinar and Sílvio Gama
Computation 2024, 12(2), 23; https://doi.org/10.3390/computation12020023 - 31 Jan 2024
Viewed by 1212
Abstract
This work focuses on optimizing the displacement of a passive particle interacting with vortices located on the surface of a sphere. The goal is to minimize the energy expended during the displacement within a fixed time. The modeling of particle dynamics, whether in Cartesian or spherical coordinates, gives rise to alternative formulations of the identical problem. Thanks to these two versions of the same problem, we can assert that the algorithm, employed to transform the optimal control problem into an optimization problem, is effective, as evidenced by the obtained controls. The numerical resolution of these formulations through a direct approach consistently produces optimal solutions, regardless of the selected coordinate system. Full article
(This article belongs to the Special Issue 10th Anniversary of Computation—Computational Engineering)
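The uncontrolled dynamics underlying the problem can be sketched directly: on the unit sphere, a point vortex of strength Γᵢ at position xᵢ induces the velocity Γᵢ(xᵢ × x)/(4π(1 − xᵢ·x)). This Python fragment advects a passive particle in that field; the optimal control itself is not modeled here, and the vortex configuration is an assumption.

```python
import numpy as np

def vortex_velocity(x, vortex_pos, gammas):
    """Velocity induced at unit vector x by point vortices on the unit
    sphere: u = sum_i Gamma_i/(4*pi) * (x_i x x)/(1 - x_i . x)."""
    u = np.zeros(3)
    for xi, g in zip(vortex_pos, gammas):
        u += g / (4*np.pi) * np.cross(xi, x) / (1.0 - np.dot(xi, x))
    return u

# Two vortices of opposite strength at the poles; a passive particle
# starting on the equator is advected by their field.
vortices = [np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, -1.0])]
gammas = [1.0, -1.0]
p = np.array([1.0, 0.0, 0.0])
dt = 0.01
for _ in range(1000):                      # explicit Euler + renormalization
    p = p + dt * vortex_velocity(p, vortices, gammas)
    p /= np.linalg.norm(p)                 # project back onto the sphere
print(p)
```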
23 pages, 441 KiB  
Article
Maxwell’s True Current
by Robert S. Eisenberg
Computation 2024, 12(2), 22; https://doi.org/10.3390/computation12020022 - 31 Jan 2024
Viewed by 1257
Abstract
Maxwell defined a ‘true’ or ‘total’ current in a way not widely used today. He said that “… true electric current … is not the same thing as the current of conduction but that the time-variation of the electric displacement must be taken into account in estimating the total movement of electricity”. We show that the true or total current is a universal property of electrodynamics independent of the properties of matter. We use mathematics without the approximation of a dielectric constant. The resulting Maxwell current law is a generalization of the Kirchhoff law of current used in circuit analysis, that also includes the displacement current. The generalization is not a long-time low-frequency approximation in contrast to the traditional presentation of Kirchhoff’s law. Full article
(This article belongs to the Special Issue 10th Anniversary of Computation—Computational Engineering)
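The abstract's central objects admit a compact statement: since the true current equals ∇ × H, its divergence vanishes identically, giving the Maxwell current law with no low-frequency approximation.

```latex
% Maxwell's true (total) current and the resulting Maxwell current law.
% Because \nabla \times \mathbf{H} = \mathbf{J}_{\mathrm{true}} and the
% divergence of a curl vanishes identically, the law holds exactly:
\mathbf{J}_{\mathrm{true}} = \mathbf{J}_{\mathrm{conduction}} + \frac{\partial \mathbf{D}}{\partial t},
\qquad
\nabla \cdot \mathbf{J}_{\mathrm{true}} = 0 .
```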
16 pages, 268 KiB  
Review
Unraveling Arrhythmias with Graph-Based Analysis: A Survey of the MIT-BIH Database
by Sadiq Alinsaif
Computation 2024, 12(2), 21; https://doi.org/10.3390/computation12020021 - 25 Jan 2024
Cited by 1 | Viewed by 1954
Abstract
Cardiac arrhythmias, characterized by deviations from the normal rhythmic contractions of the heart, pose a formidable diagnostic challenge. Early and accurate detection remains an integral component of effective diagnosis, informing critical decisions made by cardiologists. This review paper surveys diverse computational intelligence methodologies employed for arrhythmia analysis within the context of the widely utilized MIT-BIH dataset. The paucity of adequately annotated medical datasets significantly impedes advancements in various healthcare domains. Publicly accessible resources such as the MIT-BIH Arrhythmia Database serve as invaluable tools for evaluating and refining computer-assisted diagnosis (CAD) techniques specifically targeted toward arrhythmia detection. However, even this established dataset grapples with the challenge of class imbalance, further complicating its effective analysis. This review explores the current research landscape surrounding the application of graph-based approaches for both anomaly detection and classification within the MIT-BIH database. By analyzing diverse methodologies and their respective accuracies, this investigation aims to empower researchers and practitioners in the field of ECG signal analysis. The ultimate objective is to refine and optimize CAD algorithms, ultimately culminating in improved patient care outcomes. Full article
(This article belongs to the Special Issue Graph Theory and Its Applications in Computing)
19 pages, 6917 KiB  
Article
Cooperation Dynamic through Individualistic Indirect Reciprocity Mechanism in a Multi-Dynamic Model
by Mario-Ignacio González-Silva and Ricardo-Armando González-Silva
Computation 2024, 12(2), 20; https://doi.org/10.3390/computation12020020 - 24 Jan 2024
Viewed by 1305
Abstract
This research proposes a new variant of Nowak and Sigmund’s indirect reciprocity model focused on agents’ individualism, meaning that an agent strengthens its profile to the extent that it makes a profit; the model is implemented using agent-based modeling. In addition, our model includes environment-related conditions, such as visibility and cooperative demand, and internal dispositions, such as obstinacy. The simulation results show that cooperators appear in a more significant proportion under conditions of low reputation visibility and high cooperative demand, but severe defectors take advantage of this situation and exceed the cooperators’ ratio. Some runs show a heterogeneous society only under conditions of high obstinacy and cooperative demand. In general, the simulations show diverse scenarios, including centralized, polarized, and mixed societies. The simulation results show no healthy cooperation in indirect reciprocity due to individualism. Full article
(This article belongs to the Special Issue Computational Social Science and Complex Systems)
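For context, a minimal image-scoring round in the spirit of the underlying Nowak–Sigmund model; the paper's additions (individualism, visibility, cooperative demand, obstinacy) are not modeled, and all parameter values are illustrative.

```python
import random

# Minimal image-scoring dynamics: a donor cooperates when the recipient's
# image score meets the donor's threshold strategy k.
N, b, c = 100, 1.0, 0.1
strategy = [random.randint(-5, 6) for _ in range(N)]   # cooperation threshold k
image = [0] * N
payoff = [0.0] * N

for _ in range(10000):
    donor, recipient = random.sample(range(N), 2)
    if image[recipient] >= strategy[donor]:            # cooperate
        payoff[donor] -= c
        payoff[recipient] += b
        image[donor] = min(image[donor] + 1, 5)
    else:                                              # defect
        image[donor] = max(image[donor] - 1, -5)

print(max(payoff), min(payoff))   # spread of payoffs after many rounds
```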
22 pages, 7582 KiB  
Article
A Hybrid Approach to Improve the Video Anomaly Detection Performance of Pixel- and Frame-Based Techniques Using Machine Learning Algorithms
by Hayati Tutar, Ali Güneş, Metin Zontul and Zafer Aslan
Computation 2024, 12(2), 19; https://doi.org/10.3390/computation12020019 - 24 Jan 2024
Viewed by 1736
Abstract
With the rapid development of technology in recent years, the use of cameras and the production of video and image data have increased accordingly. Therefore, there is a great need to develop and improve video surveillance techniques to their maximum extent, particularly in terms of their speed, performance, and resource utilization. It is challenging to accurately detect anomalies and to increase performance by minimizing false positives, especially in crowded and dynamic areas. Therefore, this study proposes a hybrid video anomaly detection model combining multiple machine learning algorithms with pixel-based video anomaly detection (PBVAD) and frame-based video anomaly detection (FBVAD) models. In the PBVAD model, the motion influence map (MIM) algorithm based on spatio-temporal (ST) factors is used, while in the FBVAD model, the k-nearest neighbors (kNN) and support vector machine (SVM) machine learning algorithms are used in a hybrid manner. An important result of our study is the high-performance anomaly detection achieved using the proposed hybrid algorithms on the UCF-Crime data set, which contains 128 h of original real-world video data and has not been extensively studied before. The AUC performance metrics obtained using our FBVAD-kNN algorithm averaged 98.0% in the experiments, while the success rates obtained using our PBVAD-MIM algorithm averaged 80.7%. Our study contributes significantly to the prevention of possible harm by detecting anomalies in video data in near real time. Full article
(This article belongs to the Section Computational Engineering)
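A hedged sketch of a frame-based kNN/SVM hybrid score with scikit-learn, averaging the two classifiers' anomaly probabilities per frame; the features are random stand-ins, not the paper's frame descriptors, and the averaging rule is an assumption.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Hybrid frame scoring: average kNN and SVM anomaly probabilities.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 32))            # stand-in frame features
y_train = rng.integers(0, 2, size=200)          # 0 = normal, 1 = anomalous

knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
svm = SVC(probability=True).fit(X_train, y_train)

X_frames = rng.normal(size=(10, 32))
p = (knn.predict_proba(X_frames)[:, 1] + svm.predict_proba(X_frames)[:, 1]) / 2
anomalous = p > 0.5
print(anomalous)
```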