Appl. Sci., Volume 13, Issue 23 (December-1 2023) – 436 articles

Cover Story: CO2 trapping and methanation allow us to reduce greenhouse gas emissions and recycle CO2 into a sustainable fuel, provided renewable H2 is employed. Microwave (MW)-based reactors provide an efficient means of using electrical energy to upgrade chemicals, since MW can selectively heat the load placed in the reactor rather than the reactor itself. In this study, CO2 capture and methanation were investigated using solid adsorbents (ZrO2 and Fe3O4), microwave absorbers (SiC and Fe3O4), and Ru/SiO2 as the CO2 methanation catalyst. The sorption and catalyst beds were located in a domestic MW oven that was used to trigger CO2 desorption and methanation in the presence of H2.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; PDF is the official format. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
23 pages, 16504 KiB  
Article
Skin Imaging: A Digital Twin for Geometric Deviations on Manufactured Surfaces
Appl. Sci. 2023, 13(23), 12971; https://doi.org/10.3390/app132312971 - 04 Dec 2023
Abstract
Closed-loop manufacturing is crucial in Industry 4.0, since it provides an online detection–correction cycle that optimizes the production line using live data from the product being manufactured. By integrating the inspection system with the manufacturing processes, the production line achieves a new level of accuracy and cost savings. This is far more effective than merely inspecting the finished product to accept or reject it. Modeling the actual surface of the workpiece in production, including the manufacturing errors, makes it possible to process the live data and feed the results back to production planning. The recently introduced “skin imaging” methodology can generate 2D images as a comprehensive digital twin for geometric deviations on any scanned 3D surface, including analytical geometries and sculptured surfaces. Skin-Image has been presented as a novel methodology for the continuous representation of unorganized discrete 3D points, in which the geometric deviation on the surface is shown as image intensity. Skin-Image can be readily used in online surface inspection for automatic and precise 3D defect segmentation and characterization, and it also facilitates search-guided sampling strategies. This paper presents the implementation of skin imaging for primary engineering surfaces. The results, supported by several industrial case studies, show the high efficiency of skin imaging in providing models of the real manufactured surfaces. Full article
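As an illustration of the deviation-to-intensity mapping the abstract describes, here is a minimal sketch; the tolerance band (±0.1 mm) and the sample deviation values are our illustrative assumptions, not the paper's parameters.

```python
# Toy sketch of the Skin-Image idea: map signed geometric deviations measured
# on a surface to 8-bit image intensities, so deviation shows up as brightness.
# The tolerance band (+/-0.1 mm) and the sample values are assumptions.

TOL = 0.1  # mm; deviations are clipped to [-TOL, +TOL]

def deviation_to_intensity(dev_mm):
    """Linearly map a deviation in [-TOL, TOL] to a pixel value in [0, 255]."""
    clipped = max(-TOL, min(TOL, dev_mm))
    return round((clipped + TOL) / (2 * TOL) * 255)

deviations = [-0.1, -0.05, 0.0, 0.05, 0.1, 0.25]   # last one exceeds tolerance
print([deviation_to_intensity(d) for d in deviations])  # [0, 64, 128, 191, 255, 255]
```

An out-of-tolerance point simply saturates at full intensity, which is what makes defects stand out for segmentation.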
(This article belongs to the Special Issue Smart Manufacturing and Industry 4.0, 2nd Edition)

27 pages, 7352 KiB  
Article
Hydraulic Analysis of a Passive Wedge Wire Water Intake Screen for Ichthyofauna Protection
Appl. Sci. 2023, 13(23), 12970; https://doi.org/10.3390/app132312970 - 04 Dec 2023
Abstract
A passive wedge screen, thanks to its many functional and environmental advantages, has recently become a popular type of surface water intake for municipal and industrial purposes. The design solutions proposed in this paper for a passive wedge wire screen intake model and two different deflectors were experimentally tested in a hydraulic flume under conditions that can be considered no-flow: there was only the slight flow associated with the operation of the screen, while there was almost no flow in the hydraulic channel itself that could be considered a watercourse. A hydraulic analysis was carried out, including the velocity distribution around the screen and the determination of head losses with and without deflectors installed inside the screen. Lower inlet and inflow velocities at the surface of the water intake reduce the risk of injury or death to small fish and fry, as well as the attraction of pollutants, understood here as sediments, debris, and plant remains floating in the river. In order to achieve the lowest possible maximum inlet and inflow velocities at the highest possible intake capacity, it was necessary to equalize the approach velocity distributions. It was shown that the proposed deflectors equalized the approach velocity distributions and reduced the maximum inflow and inlet velocities. A deflector with an uneven porosity distribution equalized the approach velocities better than a deflector with equal openings, although the differences were small. Installing the wedge screen model reduced the maximum inlet velocity from over 2 m/s to 0.08 m/s, and installing the deflectors with equal and unequal openings further reduced it to 0.06 m/s and 0.05 m/s, respectively. In addition to the laboratory tests, the paper describes numerical simulations performed in ANSYS Fluent software. The simulation results broadened the study and allowed a comparison with the velocity values obtained at the measuring points during the laboratory tests. Full article

22 pages, 10869 KiB  
Article
Local Thickness Optimization of Functionally Graded Lattice Structures in Compression
Appl. Sci. 2023, 13(23), 12969; https://doi.org/10.3390/app132312969 - 04 Dec 2023
Abstract
This paper presents a new method for optimizing the thickness distribution of a functionally graded lattice structure. It links the thickness of discrete lattice regions via mathematical functions, reducing the required number of optimization variables while being applicable to highly nonlinear models and arbitrary optimization goals. This study demonstrates the method’s functionality by altering the local thickness of a lattice structure in compression, optimizing the structure’s specific energy absorption at constant weight. The simulation results suggest significant improvement potential for the investigated Simple Cubic lattice, but less so for the Isotruss variant. The energy absorption levels of the physical test results closely agree with the simulations; however, great care must be taken to accurately capture material and geometry deviations stemming from the manufacturing process. The proposed method can be applied to other lattice structures or goals and could be useful in a wide range of applications where the optimization of lightweight and high-performance structures is required. Full article
(This article belongs to the Special Issue Structural Optimization Methods and Applications)

30 pages, 11810 KiB  
Article
A Modeling of Human Reliability Analysis on Dam Failure Caused by Extreme Weather
Appl. Sci. 2023, 13(23), 12968; https://doi.org/10.3390/app132312968 - 04 Dec 2023
Abstract
Human factors are introduced into the dam risk analysis method to improve the existing dam risk management theory. This study constructs the path of human factor failure in dam collapse, explores the failure pattern of each node, and obtains the performance shaping factors (PSFs) involved. The resulting model was combined with a Bayesian network (BN), and sensitivity analysis was performed using entropy reduction. The study obtained a human factor failure pathway consisting of four components: monitoring and awareness, state diagnosis, plan formulation, and operation execution, together with a PSF set containing five factors: operator, technology, organization, environment, and task. Operator factors in the BN are the most sensitive, while the deeper causes are failures in organizational and managerial factors. The results show that the model can depict the relationships between the factors, quantitatively measure the failure probability, and identify high-impact causes for risk control. Governments should give greater weight to the human factor in dam projects, continually strengthen the organization's safety culture and communication, and enhance the psychological resilience and professional skills of management personnel through training. This study provides valuable guidelines for human reliability analysis of dam failure, with implications for the theoretical research and engineering practice of reservoir dam safety and management. Full article
(This article belongs to the Special Issue Structural Health Monitoring for Concrete Dam)

21 pages, 4753 KiB  
Article
Effect of Loading Frequency on the Fatigue Response of Adhesive Joints up to the VHCF Range
Appl. Sci. 2023, 13(23), 12967; https://doi.org/10.3390/app132312967 - 04 Dec 2023
Abstract
Modern structures are designed to withstand in-service loads over a broad frequency spectrum. Nonetheless, mechanical properties in numerical codes are assumed to be frequency-independent to simplify calculations or due to a lack of experimental data, and this approach could lead to overdesign or failures. This study aims to quantify the frequency effects in the fatigue applications of a bi-material adhesive joint through analytical, numerical, and experimental procedures. Analytical and finite element models allowed the specimen design, whereas the frequency effects were investigated through a conventional servo-hydraulic apparatus at 5, 25, and 50 Hz and with an ultrasonic fatigue testing machine at 20 kHz. Experimentally, the fatigue life increases with the applied test frequency. Run-out stress data at 10^9 cycles follow the same trend: at 25 Hz and 50 Hz, the run-out data were found at 10 MPa, increasing to 15 MPa at 20 kHz. The P–S–N curves showed that frequency effects have a minor impact on the experimental variability and that standard deviation values lie in the range of 0.3038–0.7691 between 5 Hz and 20 kHz. Finally, the trend of fatigue strengths at 2·10^6 cycles with the applied loading frequency for selected probability levels was estimated. Full article
(This article belongs to the Special Issue Advanced Diagnosis/Monitoring of Jointed Structures)

17 pages, 10426 KiB  
Article
Self-Improved Learning for Salient Object Detection
Appl. Sci. 2023, 13(23), 12966; https://doi.org/10.3390/app132312966 - 04 Dec 2023
Abstract
Salient Object Detection (SOD) aims at identifying the most visually distinctive objects in a scene. However, learning a mapping directly from a raw image to its corresponding saliency map is still challenging. First, the binary annotations of SOD impede the model from learning the mapping smoothly. Second, the annotator’s preference introduces noisy labeling in the SOD datasets. Motivated by these, we propose a novel learning framework which consists of the Self-Improvement Training (SIT) strategy and the Augmentation-based Consistent Learning (ACL) scheme. SIT aims at reducing the learning difficulty, which provides smooth labels and improves the SOD model in a momentum-updating manner. Meanwhile, ACL focuses on improving the robustness of models by regularizing the consistency between raw images and their corresponding augmented images. Extensive experiments on five challenging benchmark datasets demonstrate that the proposed framework can play a plug-and-play role in various existing state-of-the-art SOD methods and improve their performances on multiple benchmarks without any architecture modification. Full article
(This article belongs to the Special Issue Advances in Computer Vision and Semantic Segmentation)

26 pages, 5160 KiB  
Review
Technological Breakthroughs in Sport: Current Practice and Future Potential of Artificial Intelligence, Virtual Reality, Augmented Reality, and Modern Data Visualization in Performance Analysis
Appl. Sci. 2023, 13(23), 12965; https://doi.org/10.3390/app132312965 - 04 Dec 2023
Abstract
We are currently witnessing an unprecedented era of digital transformation in sports, driven by the revolutions in Artificial Intelligence (AI), Virtual Reality (VR), Augmented Reality (AR), and Data Visualization (DV). These technologies hold the promise of redefining sports performance analysis, automating data collection, creating immersive training environments, and enhancing decision-making processes. Traditionally, performance analysis in sports relied on manual data collection, subjective observations, and standard statistical models. These methods, while effective, had limitations in terms of time and subjectivity. However, recent advances in technology have ushered in a new era of objective and real-time performance analysis. AI has revolutionized sports analysis by streamlining data collection, processing vast datasets, and automating information synthesis. VR introduces highly realistic training environments, allowing athletes to train and refine their skills in controlled settings. AR overlays digital information onto the real sports environment, providing real-time feedback and facilitating tactical planning. DV techniques convert complex data into visual representations, improving the understanding of performance metrics. In this paper, we explore the potential of these emerging technologies to transform sports performance analysis, offering valuable resources to coaches and athletes. We aim to enhance athletes’ performance, optimize training strategies, and inform decision-making processes. Additionally, we identify challenges and propose solutions for integrating these technologies into current sports analysis practices. This narrative review provides a comprehensive analysis of the historical context and evolution of performance analysis in sports science, highlighting current methods’ merits and limitations. 
It delves into the transformative potential of AI, VR, AR, and DV, offering insights into how these tools can be integrated into a theoretical model. Full article
(This article belongs to the Special Issue Analytics in Sports Sciences: State of the Art and Future Directions)

27 pages, 1210 KiB  
Systematic Review
Plant-Derived Bioactive Compounds for Rhabdomyosarcoma Therapy In Vitro: A Systematic Review
Appl. Sci. 2023, 13(23), 12964; https://doi.org/10.3390/app132312964 - 04 Dec 2023
Abstract
Rhabdomyosarcoma (RMS), the most common soft tissue sarcoma in children, constitutes approximately 40% of all recorded soft tissue tumors and is associated with a poor prognosis, with survival rates of less than 20% at 3 years. The development of resistance to cytotoxic drugs is a primary contributor to therapeutic failure. Consequently, the exploration of new therapeutic strategies is of vital importance. The potential use of plant extracts and their bioactive compounds emerges as a complementary treatment for this type of cancer. This systematic review focuses on research related to plant extracts or isolated bioactive compounds exhibiting antitumor activity against RMS cells. Literature searches were conducted in PubMed, Scopus, Cochrane, and WOS. A total of 173 articles published to date were identified, although only 40 met the inclusion criteria and were finally included. Many of these compounds are readily available and have reduced cytotoxicity, showing an apoptosis-mediated mechanism of action to induce tumor cell death. Interestingly, their use combined with chemotherapy or loaded into nanoparticles achieves better results by reducing toxicity and/or facilitating entry into tumor cells. Future in vivo studies will be necessary to verify the utility of these natural compounds as a therapeutic tool for RMS. Full article

23 pages, 4962 KiB  
Article
Cause Analysis and Accident Classification of Road Traffic Accidents Based on Complex Networks
Appl. Sci. 2023, 13(23), 12963; https://doi.org/10.3390/app132312963 - 04 Dec 2023
Abstract
The number of motor vehicles on the road is constantly increasing, leading to a rise in the number of traffic accidents. Accurately identifying the factors contributing to these accidents is a crucial topic in the field of traffic accident research. Most current research focuses on analyzing the causes of traffic accidents rather than investigating the underlying factors. This study creates a complex network for road traffic accident cause analysis using the topology method for complex networks. The network metrics are analyzed using the network parameters to obtain reduced-dimensionality feature factors, and four machine learning techniques are applied to accurately classify the accidents' severity based on the analysis results. The study divides real traffic accident data into three main categories based on the factors that influence them: time, environment, and traffic management. The results show that traffic management factors have the most significant impact on road accidents. The study also finds that Extreme Gradient Boosting (XGBoost) outperforms Logistic Regression (LR), Random Forest (RF), and Decision Tree (DT) in accurately categorizing the severity of traffic accidents. Full article
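The cause-network idea the abstract describes can be sketched in a few lines: build a small graph of contributing factors and rank nodes by degree. The factor names and edges below are invented for illustration; the paper uses real accident data and richer network metrics.

```python
# Toy cause-analysis network: edges link contributing factors, and node degree
# serves as a crude importance score. All factors and edges are hypothetical.
from collections import defaultdict

edges = [
    ("speeding", "accident"), ("night", "poor visibility"),
    ("poor visibility", "accident"), ("no signal control", "speeding"),
    ("no signal control", "poor visibility"), ("rain", "poor visibility"),
]

degree = defaultdict(int)
for a, b in edges:
    degree[a] += 1
    degree[b] += 1

ranked = sorted(degree, key=degree.get, reverse=True)
print(ranked[0], degree[ranked[0]])   # 'poor visibility' has the highest degree
```

In the paper's setting, the reduced set of high-degree factors then becomes the feature input to the LR/RF/DT/XGBoost classifiers.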
(This article belongs to the Special Issue Traffic Safety Measures and Assessment)

18 pages, 1153 KiB  
Article
FedRDS: Federated Learning on Non-IID Data via Regularization and Data Sharing
Appl. Sci. 2023, 13(23), 12962; https://doi.org/10.3390/app132312962 - 04 Dec 2023
Abstract
Federated learning (FL) is an emerging decentralized machine learning framework enabling private global model training by collaboratively leveraging local client data without transferring it centrally. Unlike traditional distributed optimization, FL trains the model at the local client and then aggregates it at the server. While this approach reduces communication costs, the local datasets of different clients are non-Independent and Identically Distributed (non-IID), which may make the local models inconsistent. The present study proposes an FL algorithm that leverages regularization and data sharing (FedRDS). The local loss function is adapted by introducing a regularization term in each round of training so that the local model gradually moves closer to the global model. However, when the gap between client data distributions becomes large, adding regularization terms increases the degree of client drift. To address this, we used a data-sharing method in which a portion of the server data is set aside as a shared dataset during initialization and distributed evenly to each client, mitigating client drift by reducing the differences between client data distributions. Analysis of the experimental outcomes indicates that FedRDS surpasses several known FL methods in various image classification tasks, enhancing both communication efficiency and accuracy. Full article
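The two ingredients the abstract names, a proximal regularization term and a shared server dataset, can be sketched with scalar "models" in pure Python. The synthetic client data, the weight mu, and the learning rate are our assumptions, not the paper's setup; this is the general FedProx-style mechanism, not FedRDS itself.

```python
# Sketch of regularized local training plus data sharing in FL.
# Scalar model w, per-client MSE loss, and a proximal term pulling w toward
# the global model. All numbers are illustrative assumptions.

def local_update(w_global, data, mu=0.1, lr=0.05, steps=50):
    """Minimize local MSE plus (mu/2)*(w - w_global)^2 by gradient descent."""
    w = w_global
    for _ in range(steps):
        grad_loss = sum(2 * (w - x) for x in data) / len(data)  # d/dw of MSE
        grad_prox = mu * (w - w_global)   # pulls w back toward the global model
        w -= lr * (grad_loss + grad_prox)
    return w

def fed_round(w_global, clients, shared, mu=0.1):
    """One FL round: share server-held data with every client, then average."""
    locals_ = [local_update(w_global, data + shared, mu) for data in clients]
    return sum(locals_) / len(locals_)

# Non-IID clients: their local optima (means) differ widely.
clients = [[1.0, 1.2, 0.8], [5.0, 5.5, 4.5]]
shared = [3.0, 3.1]             # small server dataset distributed to all clients
w = 0.0
for _ in range(20):
    w = fed_round(w, clients, shared)
print(round(w, 2))              # global model settles between the client optima
```

The shared data narrows the gap between the clients' effective distributions, which is exactly the drift-mitigation effect the abstract describes.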

20 pages, 5970 KiB  
Article
Estimation of Daily Actual Evapotranspiration of Tea Plantations Using Ensemble Machine Learning Algorithms and Six Available Scenarios of Meteorological Data
Appl. Sci. 2023, 13(23), 12961; https://doi.org/10.3390/app132312961 - 04 Dec 2023
Abstract
The tea plant (Camellia sinensis), a major global cash crop providing beverages, is facing major challenges from droughts and water shortages due to climate change. The accurate estimation of the actual evapotranspiration (ETa) of tea plants is essential for improving the water management and crop health of tea plantations. However, an accurate quantification of tea plantations' ETa is lacking, because it is a complex and non-linear process that is difficult to measure and estimate accurately. Ensemble learning (EL) is a promising machine learning approach for accurate evapotranspiration prediction. In this study, we investigated the potential of three EL algorithms—random forest (RF), bagging, and adaptive boosting (Ad)—for predicting the daily ETa of tea plants, and compared them with the commonly used k-nearest neighbor (KNN), support vector machine (SVM), and multilayer perceptron (MLP) algorithms and an empirical model. We used 36 estimation models with six scenarios of available meteorological and evapotranspiration data collected from tea plantations over a period of 12 years (2010–2021). The results show that the combination of Rn (net radiation), Tmean (mean air temperature), and RH (relative humidity) achieved reasonable precision in assessing the daily ETa of tea plantations in the absence of complete climatic datasets. Compared with the other models, the RF model demonstrated superior performance (root mean square error (RMSE): 0.41–0.56 mm day^-1, mean absolute error (MAE): 0.32–0.42 mm day^-1, R^2: 0.84–0.91) in predicting the daily ETa of tea plantations, except in Scenario 6, followed by the bagging, SVM, KNN, Ad, and MLP algorithms. In addition, the RF and bagging models exhibited the highest stability, with RMSE values changing by only −15.3% to +18.5% from the testing phase to the validation phase. Considering their high prediction accuracy and stability, the RF and bagging models can be recommended for estimating the daily ETa of tea plantations. The importance analysis demonstrated that Rn and Tmean are the most critical variables affecting the observed and predicted daily ETa dynamics of tea plantations. Full article
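The bagging idea used above can be sketched without any ML library: bootstrap-resample the data, fit a simple least-squares line to each resample, and average the predictions. The (Rn, ETa) pairs below are synthetic illustration, not plantation data, and a single linear base learner stands in for the paper's tree-based learners.

```python
# Minimal bagging sketch for ETa regression: bootstrap + OLS line + averaging.
# Data, base learner, and ensemble size are illustrative assumptions.
import random

def fit_line(xs, ys):
    """OLS slope and intercept for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def bagging_predict(data, x_new, n_models=50, seed=0):
    rng = random.Random(seed)
    preds = []
    for _ in range(n_models):
        sample = [rng.choice(data) for _ in data]   # bootstrap resample
        xs, ys = zip(*sample)
        if len(set(xs)) < 2:                        # degenerate resample, skip
            continue
        a, b = fit_line(xs, ys)
        preds.append(a * x_new + b)
    return sum(preds) / len(preds)

# Synthetic (net radiation, ETa in mm day^-1) pairs, roughly linear.
data = [(8, 1.9), (10, 2.4), (12, 3.1), (14, 3.4), (16, 4.1), (18, 4.6)]
print(round(bagging_predict(data, 13), 1))   # near the underlying trend (~3.2)
```

Averaging over resamples is what gives bagging its stability, the property the study highlights for the RF and bagging models.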

8921 KiB  
Article
Biocompatible Fe-Based Metal-Organic Frameworks as Diclofenac Sodium Delivery Systems for Migraine Treatment
Appl. Sci. 2023, 13(23), 12960; https://doi.org/10.3390/app132312960 - 04 Dec 2023
Abstract
Migraine is now the sixth most common disease in the world and affects approximately 15% of the population. Non-steroidal anti-inflammatory drugs, including ketoprofen, diclofenac sodium, and ibuprofen, are often used during migraine attacks. Unfortunately, their efficiency can be reduced due to poor water solubility and low cellular uptake. This requires the design of appropriate porous carriers, which enable drugs to reach the target site, increase their dissolution and stability, and contribute to a time-dependent specific release mode. In this research, the potential of the MIL-88A metal-organic frameworks with divergent morphologies as diclofenac sodium delivery platforms was demonstrated. Materials were synthesized under different conditions (temperature: 70 and 120 °C; solvent: distilled water or N,N-Dimethylformamide) and characterized using X-ray diffraction, low-temperature nitrogen adsorption/desorption, thermogravimetric analysis, infrared spectroscopy, and scanning electron microscopy. They showed spherical, rod- or diamond-like morphologies influenced by preparation factors. Depending on physicochemical properties, the MIL-88A samples exhibited various sorption capacities toward diclofenac sodium (833–2021 mg/g). Drug adsorption onto the surface of MIL-88A materials primarily relied on the formation of hydrogen bonds, metal coordination, and electrostatic interactions. An in vitro drug release experiment performed at pH 6.8 revealed that diclofenac sodium diffused to phosphate buffer in a controlled manner. The MIL-88A carriers provide a high percentage release of drug in the range of 58–97% after 24 h. Full article
(This article belongs to the Special Issue Young Investigators in Advanced Drug Delivery)

12 pages, 2096 KiB  
Article
Rapid Induction of Astaxanthin in Haematococcus lacustris by Mild Electric Stimulation
Appl. Sci. 2023, 13(23), 12959; https://doi.org/10.3390/app132312959 - 04 Dec 2023
Abstract
Efficient induction of astaxanthin (AXT) biosynthesis remains a considerable challenge for the industrialization of the biorefinement of the microalga Haematococcus lacustris. In this study, we evaluated the technical feasibility of photosynthetic electrotreatment to enhance AXT accumulation in H. lacustris. The AXT content of H. lacustris electrotreated at an optimal current intensity (10 mA for 4 h) was 21.8% to 34.9% higher than that of the untreated control group, depending on the physiological state of the initial palmella cells. The contents of other carotenoids (i.e., canthaxanthin, zeaxanthin, and β-carotene) were also increased by this electrotreatment. However, when H. lacustris cells were exposed to more intense electric treatments, particularly at 20 and 30 mA, cell viability significantly decreased to 84.2% and 65.6%, respectively, with a concurrent reduction in the contents of both AXT and the three other carotenoids compared to those of the control group. The cumulative effect of electric stimulation is likely related to two opposing functions of reactive oxygen species, which facilitate AXT biosynthesis as signaling molecules while also causing cellular damage as oxidizing radicals. Collectively, our findings indicate that when adequately controlled, electric stimulation can be an effective and eco-friendly strategy for inducing targeted carotenoid pigments in photosynthetic microalgae. Full article
(This article belongs to the Section Applied Biosciences and Bioengineering)

15 pages, 2616 KiB  
Article
Speed Optimization in DEVS-Based Simulations: A Memoization Approach
Appl. Sci. 2023, 13(23), 12958; https://doi.org/10.3390/app132312958 - 04 Dec 2023
Abstract
The DEVS model, designed for general discrete event simulation, explores the event status and time advance of all DEVS atomic models deployed at the time of the simulation, and then performs the scheduled simulation step. Each simulation step is accompanied by a re-exploration of the event status and time advance, which is needed to maintain the causal order of the entire model. Simulating a large-scale DEVS model is therefore time-consuming. In a similar vein, attempts to perform an HDL simulation in a DEVS space increase simulation costs by incurring repeated search costs for model transitions. In this study, we performed a statistical analysis of engine behavior to improve simulation speed, and we propose a DP-based memoization technique for the coupled model. With our method, we can expect significant performance improvements, ranging statistically from 7.4 to 11.7 times. Full article
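The memoization idea is easy to sketch: cache the result of the "which atomic model fires next" search, keyed by the tuple of model states, so recurring state configurations skip the full re-exploration. The toy models and time-advance values below are our assumptions, not the paper's engine.

```python
# Sketch of memoizing a DEVS-style scheduler's imminent-model search.
from functools import lru_cache

TIME_ADVANCE = {            # time advance per (model, phase) -- toy values
    ("gen", "active"): 2, ("gen", "idle"): 5,
    ("proc", "busy"): 3, ("proc", "idle"): 7,
}

@lru_cache(maxsize=None)
def imminent(states):
    """Return (next event delay, imminent model) for a state configuration."""
    return min((TIME_ADVANCE[(m, p)], m) for m, p in states)

# A cyclic simulation revisits the same configurations, so lookups hit the cache.
trace = [(("gen", "active"), ("proc", "idle")),
         (("gen", "idle"), ("proc", "busy")),
         (("gen", "active"), ("proc", "idle"))]   # same as step 0 -> cache hit
for states in trace:
    imminent(states)
info = imminent.cache_info()
print(info.hits, info.misses)   # 1 hit, 2 misses
```

The speedup reported in the paper comes from exactly this effect at scale: the more often the coupled model revisits a configuration, the more searches become dictionary lookups.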
(This article belongs to the Section Computing and Artificial Intelligence)

22 pages, 9003 KiB  
Article
Robust Watermarking Algorithm for Building Information Modeling Based on Element Perturbation and Invisible Characters
Appl. Sci. 2023, 13(23), 12957; https://doi.org/10.3390/app132312957 - 04 Dec 2023
Abstract
With the increasing ease of building information modeling (BIM) data usage, digital watermarking technology has become increasingly crucial for BIM data copyright protection. In response to the problem that existing robust watermarking methods mainly focus on BIM exchange formats and cannot adapt to BIM data, a novel watermarking algorithm specifically designed for BIM data, which combines element perturbation and invisible character embedding, is proposed. The proposed algorithm first calculates the centroid of the enclosing box to locate the elements, and establishes a synchronous relationship between the element coordinates and the watermark bits using a mapping mechanism, by which the watermarking robustness is effectively enhanced. Taking into consideration both data availability and the need for watermark invisibility, the algorithm classifies the BIM elements based on their mobility, perturbing the movable elements while embedding invisible characters within the attributes of the immovable elements. The scrambled watermark information is then embedded into the data. We used building model and structural model BIM data to carry out the experiments, and the results demonstrate that the signal-to-noise ratio and peak signal-to-noise ratio before and after watermark embedding are both greater than 100 dB. In addition, the added information redundancy accounts for less than 0.15% of the original data, which means that watermark embedding has very little impact on the original data. Additionally, the NC coefficient of watermark extraction is higher than 0.85 when facing attacks such as translation, element addition, element deletion, and geometry–property separation. These findings indicate the high level of imperceptibility and robustness offered by the algorithm. In conclusion, the robust watermarking algorithm for BIM data fulfills the practical requirements and provides a feasible solution for protecting the copyright of BIM data. Full article
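The invisible-character half of the scheme can be sketched with zero-width Unicode characters: watermark bits are appended to a text attribute without changing its visible rendering. The bit encoding and the attribute string below are our illustrative choices, not the paper's exact format.

```python
# Sketch of hiding watermark bits in a BIM text attribute with zero-width
# characters. Encoding convention and attribute name are assumptions.
ZW0, ZW1 = "\u200b", "\u200c"   # zero-width space = 0, zero-width non-joiner = 1

def embed(attribute: str, bits: str) -> str:
    """Append watermark bits as invisible characters; display is unchanged."""
    return attribute + "".join(ZW1 if b == "1" else ZW0 for b in bits)

def extract(attribute: str) -> str:
    """Recover the hidden bits from the attribute string."""
    return "".join("1" if c == ZW1 else "0" for c in attribute if c in (ZW0, ZW1))

marked = embed("LoadBearingWall-Type3", "1011")
print(extract(marked))                             # '1011'
print(len(marked) - len("LoadBearingWall-Type3"))  # 4 invisible characters
```

Because the payload rides on non-printing characters, the attribute's visible value, and hence the model's usability, is untouched, matching the imperceptibility figures reported above.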
(This article belongs to the Special Issue Recent Advances in Multimedia Steganography and Watermarking)
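The synchronization idea in the abstract — locating each element by its bounding-box centroid and mapping the coordinates to a watermark-bit index — can be sketched as below. This is a minimal illustration, not the paper's algorithm; the function names, the SHA-256 mapping, and the quantisation step `quant` are all assumptions:

```python
import hashlib

def bbox_centroid(bbox):
    """Centroid of an axis-aligned bounding box ((xmin, ymin, zmin), (xmax, ymax, zmax))."""
    (x0, y0, z0), (x1, y1, z1) = bbox
    return ((x0 + x1) / 2, (y0 + y1) / 2, (z0 + z1) / 2)

def watermark_bit_index(centroid, n_bits, quant=0.01):
    """Map an element's centroid to a watermark-bit index.
    Quantising the coordinates before hashing keeps the mapping stable
    under perturbations smaller than the embedding strength `quant`."""
    key = ",".join(str(round(c / quant)) for c in centroid)
    digest = hashlib.sha256(key.encode()).hexdigest()
    return int(digest, 16) % n_bits

centroid = bbox_centroid(((0.0, 0.0, 0.0), (2.0, 4.0, 6.0)))  # (1.0, 2.0, 3.0)
idx = watermark_bit_index(centroid, n_bits=64)
```

Quantising before hashing is what makes the coordinate-to-bit mapping survive the small element perturbations used for embedding.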

21 pages, 2866 KiB  
Article
Sentiment Analysis of Students’ Feedback on E-Learning Using a Hybrid Fuzzy Model
Appl. Sci. 2023, 13(23), 12956; https://doi.org/10.3390/app132312956 - 04 Dec 2023
Viewed by 597
Abstract
It is crucial to analyze opinions about the significant shift in education systems around the world, because of the widespread use of e-learning, to gain insight into the state of education today. A particular focus should be placed on the feedback from students [...] Read more.
It is crucial to analyze opinions about the significant shift in education systems around the world, caused by the widespread use of e-learning, to gain insight into the state of education today. A particular focus should be placed on the feedback from students regarding the profound changes they experience when using e-learning. In this paper, we propose a model that combines fuzzy logic with bidirectional long short-term memory (BiLSTM) for the sentiment analysis of students’ textual feedback on e-learning. We obtained this feedback from students’ tweets expressing their opinions about e-learning. The collected feedback had some ambiguous characteristics in terms of writing style and language: it was written informally in Saudi dialects, without adherence to standardized Arabic writing rules. The proposed model benefits from the capability of the deep neural network BiLSTM to learn and from the ability of fuzzy logic to handle uncertainty. The model was evaluated using the appropriate evaluation metrics: accuracy, F1-score, precision, and recall. The results showed the effectiveness of our proposed model and that it works well for analyzing opinions in Arabic texts written in Saudi dialects. The proposed model outperformed the compared models, obtaining an accuracy of 86% and an F1-score of 85%. Full article
(This article belongs to the Special Issue Artificial Intelligence in Complex Networks (2nd Edition))
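The fuzzy-logic half of such a hybrid can be illustrated with triangular membership functions that turn a network's raw sentiment score into class memberships. This is only a generic sketch of fuzzification/defuzzification under assumed class boundaries, not the paper's model:

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify(score):
    """Map a network output in [0, 1] to fuzzy sentiment memberships
    (class boundaries here are illustrative assumptions)."""
    return {
        "negative": tri(score, -0.5, 0.0, 0.5),
        "neutral":  tri(score,  0.0, 0.5, 1.0),
        "positive": tri(score,  0.5, 1.0, 1.5),
    }

def defuzzify(memberships):
    """Pick the class with the highest membership degree."""
    return max(memberships, key=memberships.get)

label = defuzzify(fuzzify(0.9))   # "positive"
```

The overlap between adjacent triangles is what lets the fuzzy layer express the uncertainty of borderline scores instead of forcing a hard threshold.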

16 pages, 12858 KiB  
Article
Improving the Maritime Traffic Evaluation with the Course and Speed Model
Appl. Sci. 2023, 13(23), 12955; https://doi.org/10.3390/app132312955 - 04 Dec 2023
Viewed by 596
Abstract
Recent projections from marine transportation experts highlight an uptick in maritime traffic, attributed to the fourth industrial revolution’s technological strides and global economic rebound. This trend underscores the need for enhanced systems for maritime accident prediction and traffic management. In this study, to [...] Read more.
Recent projections from marine transportation experts highlight an uptick in maritime traffic, attributed to the fourth industrial revolution’s technological strides and the global economic rebound. This trend underscores the need for enhanced systems for maritime accident prediction and traffic management. In this study, to analyze the flow of maritime traffic macroscopically, the spatiality and continuity reflected in ships’ outputs are considered. The course–speed (CS) model used in this study analyzes course over ground (COG), rate of turn (ROT), speed, and acceleration, all obtainable from a ship’s AIS data, and calculates the deviation from the standard plan. In addition, spatiality and continuity were quantitatively analyzed to evaluate the smoothness of maritime traffic flow. A notable finding is that, in the target sea area, the outbound and inbound CS indices measured 0.7613 and 0.7501, respectively, suggesting that outbound ship flows are more affected than inbound flows in terms of the fluidity of maritime traffic. Using the CS model, a detailed quantitative evaluation of the spatiality and continuity of maritime traffic is presented. This approach facilitates robust comparisons over diverse scales and periods. Moreover, the research advances our understanding of the factors dictating maritime traffic flow based on ship attributes. The study’s insights can catalyze the development of a novel index for maritime traffic management, enhancing safety and efficiency. Full article
(This article belongs to the Section Marine Science and Engineering)
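One illustrative way to turn per-fix deviations from a standard plan into a single smoothness index (the abstract does not give the CS model's actual formula, so the weighting and deviation caps below are assumptions):

```python
def cs_index(track, plan, max_course_dev=30.0, max_speed_dev=5.0):
    """Illustrative course-speed smoothness index in [0, 1]: 1.0 means every
    AIS fix matches the standard plan exactly. `track` and `plan` are lists
    of (cog_degrees, sog_knots) pairs; the deviation caps are assumptions."""
    devs = []
    for (cog, sog), (pcog, psog) in zip(track, plan):
        dcog = abs((cog - pcog + 180) % 360 - 180)   # course error wrapped to [0, 180]
        dsog = abs(sog - psog)
        devs.append(0.5 * min(dcog / max_course_dev, 1.0)
                    + 0.5 * min(dsog / max_speed_dev, 1.0))
    return 1.0 - sum(devs) / len(devs)
```

The wrapped course difference matters in practice: a ship swinging from 350° to 10° has deviated 20°, not 340°.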

21 pages, 4440 KiB  
Article
Park Inclusive Design Index as a Systematic Evaluation Framework to Improve Inclusive Urban Park Uses: The Case of Hangzhou Urban Parks
Appl. Sci. 2023, 13(23), 12954; https://doi.org/10.3390/app132312954 - 04 Dec 2023
Cited by 1 | Viewed by 622
Abstract
This study aims to optimize the evaluation system of inclusive design in urban parks, emphasizing the systemic nature of sensory, cognitive, and motor capacity support and exploring its role in park design practice. Based on the capability demand model, this study constructed indicators [...] Read more.
This study aims to optimize the evaluation system of inclusive design in urban parks, emphasizing the systemic nature of sensory, cognitive, and motor capacity support and exploring its role in park design practice. Based on the capability demand model, this study constructed indicators through literature collation and focus group discussion and assigned weights through hierarchical analysis to finally construct the Park Inclusive Design Index (PIDI). Then, the PIDI was utilized to assess the inclusive design performance of 48 urban parks in Hangzhou, China. The results of this study show that the overall inclusive design level of parks is relatively low (the average PIDI < 70), especially in the provision of cognitive support (cognitive-related indicator < 4). Meanwhile, comprehensive and specialized parks performed better in inclusive design compared to community parks and leisure parks. The level of inclusive design is moderatory correlated with the park renovation time and the park area, and strongly correlated with geographic location (scenic spot parks perform better; the parks in the old city perform worse). Ten indicators in the assessment scored below 2, which reveals the current status, shortcomings, and general problems with inclusive facilities in Hangzhou’s urban parks. This study integrated the needs and ability differences of people into the indicators, providing an assessment framework with broad applicability. Inclusive performance is a long-term process, and the implementation of the evaluation framework will provide a reference guide for the design, construction, operation, and maintenance of urban parks across China and even around the world. Full article
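The core of an index like the PIDI is a weighted sum of indicator scores, with weights derived from hierarchical (AHP) analysis. A minimal sketch, assuming hypothetical 0–5 indicator scores and weights summing to 1 (the actual indicator set and weights are in the paper):

```python
def pidi(scores, weights):
    """Weighted index on a 0-100 scale from indicator scores (0-5) and
    AHP-derived weights summing to 1. Both inputs are hypothetical here."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return 100.0 * sum(s * w for s, w in zip(scores, weights)) / 5.0

# e.g. three indicator groups: sensory, cognitive, motor support
value = pidi([4.0, 2.5, 3.5], [0.3, 0.4, 0.3])   # roughly 65 on a 0-100 scale
```

On this scale the paper's finding "average PIDI < 70" would mean the weighted indicator scores average below 3.5 out of 5.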

27 pages, 10309 KiB  
Article
Enhancing Neonatal Incubator Energy Management and Monitoring through IoT-Enabled CNN-LSTM Combination Predictive Model
Appl. Sci. 2023, 13(23), 12953; https://doi.org/10.3390/app132312953 - 04 Dec 2023
Viewed by 716
Abstract
This research focuses on enhancing neonatal care by developing a comprehensive monitoring and control system and an efficient model for predicting electrical energy consumption in incubators, aiming to mitigate potential adverse effects caused by excessive energy usage. Employing a combination of 1-dimensional convolutional [...] Read more.
This research focuses on enhancing neonatal care by developing a comprehensive monitoring and control system and an efficient model for predicting electrical energy consumption in incubators, aiming to mitigate potential adverse effects caused by excessive energy usage. Employing a combination of 1-dimensional convolutional neural network (1D-CNN) and long short-term memory (LSTM) methods within the framework of the Internet of Things (IoT), the study encompasses multiple components, including hardware, network, database, data analysis, and software. The research outcomes encompass a real-time web application for monitoring and control, temperature distribution visualizations within the incubator, a prototype incubator, and a predictive energy consumption model. Testing the LSTM method resulted in an RMSE of 42.650 and an MAE of 33.575, while the CNN method exhibited an RMSE of 37.675 and an MAE of 30.082. Combining CNN and LSTM yielded an RMSE of 32.436 and an MAE of 25.382, demonstrating the potential for significantly improving neonatal care. Full article
(This article belongs to the Special Issue Future Internet of Things: Applications, Protocols and Challenges)
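The preprocessing and the reported error metrics for a CNN-LSTM forecaster like this can be sketched in a few lines; the window length and the toy series are assumptions, and the network itself is omitted:

```python
def make_windows(series, window, horizon=1):
    """Split a univariate series into (input window, target) pairs -
    the usual supervised framing before a 1D-CNN/LSTM forecaster."""
    X, y = [], []
    for i in range(len(series) - window - horizon + 1):
        X.append(series[i:i + window])
        y.append(series[i + window + horizon - 1])
    return X, y

def rmse(true, pred):
    """Root mean squared error, as reported in the abstract."""
    return (sum((t - p) ** 2 for t, p in zip(true, pred)) / len(true)) ** 0.5

def mae(true, pred):
    """Mean absolute error, as reported in the abstract."""
    return sum(abs(t - p) for t, p in zip(true, pred)) / len(true)

X, y = make_windows([1, 2, 3, 4, 5, 6], window=3)
# X: [[1, 2, 3], [2, 3, 4], [3, 4, 5]], y: [4, 5, 6]
```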

17 pages, 5992 KiB  
Article
Activity Concentrations of Cs-137, Sr-90, Am-241, Pu-238, and Pu-239+240 and an Assessment of Pollution Sources Based on Isotopic Ratio Calculations and the HYSPLIT Model in Tundra Landscapes (Subarctic Zone of Russia)
Appl. Sci. 2023, 13(23), 12952; https://doi.org/10.3390/app132312952 - 04 Dec 2023
Viewed by 467
Abstract
The paper is devoted to the assessment of the content of anthropogenic radionuclides in tundra landscapes of the subarctic zone of Russia. The authors of the article studied the features of accumulation and migration of anthropogenic radionuclides and identified probable sources of their [...] Read more.
The paper is devoted to the assessment of the content of anthropogenic radionuclides in tundra landscapes of the subarctic zone of Russia. The authors of the article studied the features of accumulation and migration of anthropogenic radionuclides and identified probable sources of their entry into environmental objects. Peat samples were collected on the territory of the Kaninskaya Tundra of the Nenets Autonomous Okrug (Northwest Russia). A total of 46 samples were taken. The following parameters were determined in each peat sample: (1) activity and pollution density of anthropogenic radionuclides; (2) isotopic ratios of anthropogenic radionuclides; (3) activity ratios of each radionuclide for layers 10–20 cm and 0–10 cm. The results of the studies showed that the pollution density of the Nes River basin with the radionuclides Cs-137 and Sr-90 is up to 4.85 × 103 Bq·m−2 and 1.88 × 103 Bq·m−2, respectively, which is 2–5 times higher than the available data for the Kanin tundra, as well as for Russia and the world as a whole. The data obtained for Am-241, Pu-238, and Pu-239+240 showed insignificant activity of these radionuclides and generally correspond to the values for other tundra areas in Russia and the world. It was found that some tundra areas (“peat lowlands”) are characterized by increased radionuclide content due to the process of accumulation and migration along the vertical profile. Calculations of isotope ratios Sr-90/Cs-137, Pu-238/Pu-239+240, Pu-239+240/Cs-137, Am-241/Pu-239+240 and air mass trajectories based on the HYSPLIT model showed that the main sources of anthropogenic radionuclide contamination are global atmospheric fallout and the Chernobyl accident. Full article
(This article belongs to the Section Environmental Sciences)
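The source-attribution logic — comparing a measured isotope ratio against known source signatures — can be sketched as below. The signature values are commonly cited approximations from the literature (roughly 0.03 for global fallout and 0.5 for Chernobyl-derived Pu), not values from this paper, and the helper and sample activities are hypothetical:

```python
def classify_pu_source(ratio, fallout=0.03, chernobyl=0.5):
    """Attribute a Pu-238/Pu-239+240 activity ratio to the nearer of two
    approximate literature signatures (assumed values, for illustration)."""
    return ("global fallout" if abs(ratio - fallout) < abs(ratio - chernobyl)
            else "Chernobyl")

# Activity ratio from measured activities (hypothetical sample values):
a_pu238, a_pu239240 = 0.6, 15.0
source = classify_pu_source(a_pu238 / a_pu239240)   # ratio 0.04
```

In practice, as in the paper, such ratio evidence is combined with back-trajectory modelling (HYSPLIT) rather than used alone.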

9 pages, 3080 KiB  
Article
A Bilateral Craniectomy Technique for In Vivo Photoacoustic Brain Imaging
Appl. Sci. 2023, 13(23), 12951; https://doi.org/10.3390/app132312951 - 04 Dec 2023
Viewed by 565
Abstract
Due to the high possibility of mechanical damage to the underlying tissues attached to the rat skull during a craniectomy, previously described methods for visualization of the rat brain in vivo are limited to unilateral craniotomies and small cranial windows, often measuring 4–5 [...] Read more.
Due to the high possibility of mechanical damage to the underlying tissues attached to the rat skull during a craniectomy, previously described methods for visualization of the rat brain in vivo are limited to unilateral craniotomies and small cranial windows, often measuring 4–5 mm. Here, we introduce a novel method for producing bilateral craniectomies that encompass the frontal, parietal, and temporal bones via sequential thinning of the skull while preserving the dura. This procedure requires the bilateral removal of a portion of the temporalis muscle, which adds an additional 2–3 mm of exposure within the cranial opening. Therefore, while this surgery can be performed in vivo, it is strictly non-survival. By creating large, bilateral craniectomies, this methodology carries several key advantages, such as the opportunity to test innovative imaging modalities that require a larger field of view and the use of the contralateral hemisphere as a control for neurophysiological studies. Full article
(This article belongs to the Section Applied Physics General)

13 pages, 2104 KiB  
Article
Joint Channel and Power Assignment for Underwater Cognitive Acoustic Networks on Marine Mammal-Friendly
Appl. Sci. 2023, 13(23), 12950; https://doi.org/10.3390/app132312950 - 04 Dec 2023
Viewed by 454
Abstract
When marine animals and underwater acoustic sensor networks (UASNs) share spectrum resources, problems such as serious harm caused to marine animals by underwater acoustic systems and scarcity of underwater spectrum resources are encountered. To address these issues, a mammal-friendly underwater acoustic sensor network [...] Read more.
When marine animals and underwater acoustic sensor networks (UASNs) share spectrum resources, problems arise such as serious harm caused to marine animals by underwater acoustic systems and the scarcity of underwater spectrum resources. To address these issues, a mammal-friendly channel and power allocation algorithm for underwater acoustic sensor networks is proposed. Firstly, marine animals are treated as authorized users and sensor nodes as unauthorized users. By accounting for the interference that sensor nodes impose on authorized users, this approach improves network service quality and achieves a mammal-friendly underwater communication mechanism. Secondly, to maximize the utility of unauthorized users, the algorithm incorporates the network interference level and the nodes’ remaining energy into a game-theoretical framework. Using channel allocation and power control, a game model is constructed with a unique Nash equilibrium point. Finally, simulations show that the proposed algorithm obtains a stable optimal power value; as the network load increases, its system capacity is significantly higher than that of traditional cognitive radio technology and a common spectrum allocation algorithm, and node transmit power can be controlled according to the remaining energy, comprehensively improving the overall performance of the network. Full article
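The game-theoretic power control described here can be illustrated with a toy iterated best-response loop, where each node repeatedly picks the power maximising its own utility given the others' choices. The utility function, prices, and grid search below are illustrative assumptions, not the paper's exact game model:

```python
import math

def best_response_powers(gains, noise, p_max, price, iters=50):
    """Iterated best response for a toy power-control game: node i
    maximises log(1 + SINR_i) - price_i * p_i over a power grid.
    At a fixed point no node can gain by deviating (a Nash equilibrium)."""
    n = len(gains)
    powers = [p_max / 2] * n
    grid = [p_max * k / 100 for k in range(101)]
    for _ in range(iters):
        for i in range(n):
            interference = noise + sum(gains[j] * powers[j]
                                       for j in range(n) if j != i)
            powers[i] = max(grid, key=lambda p: math.log(1 + gains[i] * p / interference)
                                                - price[i] * p)
    return powers

# A node with a high power price (e.g. low remaining energy, or strong
# interference impact on mammals) settles on a lower transmit power:
p = best_response_powers(gains=[1.0, 1.0], noise=0.1, p_max=1.0, price=[0.5, 5.0])
```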

21 pages, 17339 KiB  
Article
Design and Dynamic Simulation Verification of an On-Orbit Operation-Based Modular Space Robot
Appl. Sci. 2023, 13(23), 12949; https://doi.org/10.3390/app132312949 - 04 Dec 2023
Viewed by 459
Abstract
Space robots have been playing an important role in space on-orbit operation missions. However, the traditional configuration of space robots only has a single function and cannot meet the requirements of different space missions, and the launch cost of space robots is very [...] Read more.
Space robots play an important role in space on-orbit operation missions. However, the traditional configuration of a space robot has only a single function and cannot meet the requirements of different space missions, and the launch cost of space robots is very high. Thus, a reconfigurable modular space robot system that can carry multiple payloads and adapt to different missions is of great significance. Based on the analysis of a robot space mission, combined with existing reconfigurable robots, this paper develops a configuration design scheme for a modular reconfigurable space robot and carries out the prototype design. According to the configuration characteristics of the modules, the dynamics of the space robot are modeled using graph-theoretic analysis and the principle of virtual work. Related application scenarios are set up, and the function and feasibility of the dynamic modeling methods are verified through assembly experiments and dynamic simulation. Full article
(This article belongs to the Special Issue Advanced Guidance and Control of Hypersonic Vehicles)

12 pages, 5001 KiB  
Article
Computational Imaging at the Infrared Beamline of the Australian Synchrotron Using the Lucy–Richardson–Rosen Algorithm
Appl. Sci. 2023, 13(23), 12948; https://doi.org/10.3390/app132312948 - 04 Dec 2023
Viewed by 505
Abstract
The Fourier transform infrared microspectroscopy (FTIRm) system of the Australian Synchrotron has a unique optical configuration with a peculiar beam profile consisting of two parallel lines. The beam is tightly focused using a 36× Schwarzschild objective to a point on the sample and [...] Read more.
The Fourier transform infrared microspectroscopy (FTIRm) system of the Australian Synchrotron has a unique optical configuration with a peculiar beam profile consisting of two parallel lines. The beam is tightly focused using a 36× Schwarzschild objective to a point on the sample, and the sample is scanned pixel by pixel to record an image of a single plane using a single-pixel mercury cadmium telluride detector. A computational stitching procedure is used to obtain a 2D image of the sample. However, if the imaging condition is not satisfied, the recorded object information is distorted. Unlike commonly observed blurring, the case with a Schwarzschild objective is unique, with a donut-like intensity distribution with three distinct lobes. Consequently, commonly used deblurring methods are not efficient for image reconstruction. In this study, we have applied a recently developed computational reconstruction method called the Lucy–Richardson–Rosen algorithm (LRRA) in the online FTIRm system for the first time. The method involves two steps: a training step and an imaging step. In the training step, the point spread function (PSF) library is recorded by temporal summation of the intensity patterns obtained by scanning a pinhole in the x–y directions across the path of the beam using the single-pixel detector along the z direction. In the imaging step, the process is repeated for a complicated object along only a single plane. This new technique is named coded aperture scanning holography. Different types of samples, such as two pinholes, a number 3 USAF object, a cross-shaped object on a barium fluoride substrate, and a silk sample, are used for the demonstration of both image recovery and 3D imaging applications. Full article
(This article belongs to the Collection Optical Design and Engineering)
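The Lucy–Richardson core that LRRA builds on can be sketched in 1D with a known PSF. This is the classic LR iteration only; the Rosen-type nonlinear correlation step that distinguishes LRRA is deliberately omitted, and the circular-convolution helpers are assumptions made to keep the sketch dependency-free:

```python
def convolve(x, k):
    """Circular 1-D convolution."""
    n = len(x)
    return [sum(x[(i - j) % n] * k[j] for j in range(len(k))) for i in range(n)]

def correlate(x, k):
    """Circular 1-D correlation (the adjoint of convolve)."""
    n = len(x)
    return [sum(x[(i + j) % n] * k[j] for j in range(len(k))) for i in range(n)]

def lucy_richardson(measured, psf, iters=20):
    """Classic multiplicative LR update: est *= correlate(measured / (est*psf), psf).
    With a normalised PSF the total flux of the estimate is conserved."""
    est = [1.0] * len(measured)
    for _ in range(iters):
        blurred = convolve(est, psf)
        ratio = [m / max(b, 1e-12) for m, b in zip(measured, blurred)]
        est = [e * c for e, c in zip(est, correlate(ratio, psf))]
    return est

obj = [0, 0, 4, 0, 0]                       # a point-like object
psf = [0.5, 0.25, 0.25]                     # normalised blur kernel
recovered = lucy_richardson(convolve(obj, psf), psf, iters=20)
```

Each iteration sharpens the estimate toward the point source while conserving total intensity, which is why LR-family methods suit the donut-shaped PSFs described above better than naive inverse filtering.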

15 pages, 1678 KiB  
Article
Controlling Thermal Radiation in Photonic Quasicrystals Containing Epsilon-Negative Metamaterials
Appl. Sci. 2023, 13(23), 12947; https://doi.org/10.3390/app132312947 - 04 Dec 2023
Viewed by 500
Abstract
The transfer matrix approach is used to study the optical characteristics of thermal radiation in a one-dimensional photonic crystal (1DPC) with metamaterial. In this method, every layer within the multilayer structure is associated with its specific transfer matrix. Subsequently, it links the incident [...] Read more.
The transfer matrix approach is used to study the optical characteristics of thermal radiation in a one-dimensional photonic crystal (1DPC) with metamaterial. In this method, every layer within the multilayer structure is associated with its specific transfer matrix, which links the beam incident from the previous layer to the next layer. The proposed structure is composed of three types of materials, namely InSb, ZrO2, and Teflon, and one type of epsilon-negative (ENG) metamaterial, and is organized in accordance with the laws of sequencing. The semiconductor InSb can adjust the bandgaps through its thermally responsive permittivity, allowing for tunability with temperature changes, while the metamaterial modifies the bandgaps according to its negative permittivity. Using quasi-periodic arrangements, in contrast to strictly periodic ones, produces more diverse results in modifying the structure’s band-gaps. A new mixed-quasi-periodic (MQP) arrangement, which combines two quasi-periodic sequences, provides more freedom for modifying the properties of the medium than periodic arrangements do. The ability to control thermal radiation is crucial in a range of optical applications, since it is frequently unpolarized and incoherent in both space and time. These configurations allow for the suppression and emission of thermal radiation in a certain frequency range due to their fundamental nature as photonic band-gaps (PBGs), so we are able to control the thermal radiation by changing the structure arrangement. Here, we use an indirect method based on Kirchhoff’s law of thermal radiation to investigate black-body emittance using the well-known transfer matrix technique, from which we obtain the transmission and reflection coefficients and the associated transmittance and reflectance, T and R, respectively.
Here, the effects of several parameters, including the input beam’s angle, polarization, and period on tailoring the thermal radiation spectrum of the proposed structure, are studied. The results show that in some frequency bands, thermal radiation exceeded the black body limit. There were also good results in terms of complete stop bands for both TE and TM polarization at different incident angles and frequencies. This study produces encouraging results for the creation of Terahertz (THz) filters and selective thermal emitters. The tunability of our media is a crucial factor that influences the efficiency and function of our desired photonic outcome. Therefore, exploiting MQP sequences or arrangements is a promising strategy, as it allows us to rearrange our media more flexibly than quasi-periodic sequences and thus achieve our optimal result. Full article
(This article belongs to the Section Applied Physics General)
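The transfer matrix method itself is standard and can be sketched for normal incidence with lossless dielectric layers. The characteristic-matrix form below is the textbook one; it does not include the dispersive InSb/ENG permittivities used in the paper, and the example layers are assumptions:

```python
import cmath, math

def layer_matrix(n, d, wl):
    """Characteristic matrix of a homogeneous layer at normal incidence:
    phase thickness delta = 2*pi*n*d/wl, admittance eta = n (units of Y0)."""
    delta = 2 * math.pi * n * d / wl
    eta = n
    return [[cmath.cos(delta), 1j * cmath.sin(delta) / eta],
            [1j * eta * cmath.sin(delta), cmath.cos(delta)]]

def matmul(a, b):
    return [[a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]],
            [a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]]]

def transmittance(layers, wl, n_in=1.0, n_out=1.0):
    """Multiply the layer matrices, then apply the standard admittance
    formula for the bounding media to get T."""
    m = [[1, 0], [0, 1]]
    for n, d in layers:
        m = matmul(m, layer_matrix(n, d, wl))
    (m11, m12), (m21, m22) = m
    t = 2 * n_in / (n_in * m11 + n_in * n_out * m12 + m21 + n_out * m22)
    return (n_out / n_in) * abs(t) ** 2

# A half-wave layer (d = wl / 2n) is "absentee": T returns to that of the
# bare interface, while a quarter-wave layer changes T maximally.
t_half = transmittance([(2.0, 0.25)], wl=1.0)
t_quarter = transmittance([(2.0, 0.125)], wl=1.0)
```

Stacking many such matrices for a quasi-periodic sequence of layers, then evaluating T over frequency, is exactly how the PBGs discussed above appear; Kirchhoff's law then links absorptance 1 − T − R to emittance.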

22 pages, 5551 KiB  
Article
A Multivariate Time Series Analysis of Electrical Load Forecasting Based on a Hybrid Feature Selection Approach and Explainable Deep Learning
Appl. Sci. 2023, 13(23), 12946; https://doi.org/10.3390/app132312946 - 04 Dec 2023
Cited by 1 | Viewed by 645
Abstract
In the smart grid paradigm, precise electrical load forecasting (ELF) offers significant advantages for enhancing grid reliability and informing energy planning decisions. Specifically, mid-term ELF is a key priority for power system planning and operation. Although statistical methods were primarily used because ELF [...] Read more.
In the smart grid paradigm, precise electrical load forecasting (ELF) offers significant advantages for enhancing grid reliability and informing energy planning decisions. Specifically, mid-term ELF is a key priority for power system planning and operation. Although statistical methods were primarily used at first because ELF is a time series problem, deep learning (DL)-based forecasting approaches are now more commonly employed and successful in achieving precise predictions. However, these DL-based techniques, known as black-box models, lack interpretability. When interpreting a DL model, employing explainable artificial intelligence (XAI) yields significant advantages by extracting meaningful information from the DL model outputs and the causal relationships among various factors. At the same time, precise load forecasting necessitates feature engineering to identify pertinent input features and determine optimal time lags. This study strives to accomplish a mid-term forecast of electrical load utilizing aggregated electrical load consumption data, while considering the aforementioned critical aspects. A hybrid framework for feature selection and extraction is proposed for electric load forecasting. The feature selection phase employs a combination of a Pearson correlation (PC) filter and embedded random forest regressor (RFR) and decision tree regressor (DTR) methods to determine the correlation and significance of each feature. In the feature extraction phase, we utilized a wrapper-based technique called recursive feature elimination with cross-validation (RFECV) to eliminate redundant features. Multi-step-ahead time series forecasting is conducted utilizing three distinct long short-term memory (LSTM) models: a basic LSTM, a bi-directional LSTM (Bi-LSTM) and an attention-based LSTM, to accurately predict electrical load consumption thirty days in advance.
Through numerous studies, a reduction in forecasting errors of nearly 50% has been attained. Additionally, the local interpretable model-agnostic explanations (LIME) methodology, an explainable artificial intelligence (XAI) technique, is utilized to explain the mid-term ELF model. As far as the authors are aware, XAI has not yet been applied in mid-term aggregated electrical load forecasting studies. Quantitative and detailed evaluations have been conducted, with the experimental results indicating that this comprehensive approach is successful in forecasting multivariate mid-term loads. Full article
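The filter stage of such a hybrid pipeline — keeping only features whose Pearson correlation with the load exceeds a threshold — is easy to sketch; the threshold, feature names, and toy data below are assumptions, and the embedded (RFR/DTR) and wrapper (RFECV) stages would follow on the surviving features:

```python
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def filter_features(features, target, threshold=0.5):
    """Filter stage: keep features with |r| >= threshold against the load."""
    return [name for name, col in features.items()
            if abs(pearson(col, target)) >= threshold]

feats = {"temperature": [1, 2, 3, 4], "calendar_noise": [5, 1, 4, 2]}
load = [2, 4, 6, 8]
kept = filter_features(feats, load)   # only "temperature" survives
```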

31 pages, 16043 KiB  
Article
A Java Application for Teaching Graphs in Undergraduate Courses
Appl. Sci. 2023, 13(23), 12945; https://doi.org/10.3390/app132312945 - 04 Dec 2023
Viewed by 764
Abstract
Graph theory is a common topic that is introduced as part of the curricula of computing courses such as Computer Science, Computer Engineering, Data Science, Information Technology and Software Engineering. Understanding graphs is fundamental for solving many real-world problems, such as network routing, [...] Read more.
Graph theory is a common topic that is introduced as part of the curricula of computing courses such as Computer Science, Computer Engineering, Data Science, Information Technology and Software Engineering. Understanding graphs is fundamental for solving many real-world problems, such as network routing, social network analysis, and circuit design; however, many students struggle to grasp the concepts of graph theory, as they often have difficulties in visualising and manipulating graphs. To overcome these difficulties, educational software can be used to aid in the teaching and learning of graph theory. This work focuses on the development of a Java system for graph visualisation and computation, called MaGraDa (Graphs for Discrete Mathematics), that can help both students and teachers of undergraduate or high school courses that include concepts and algorithms related to graphs. A survey on the use of this tool was conducted to explore the satisfaction level of students on a Discrete Mathematics course taken as part of a Computer Engineering degree at the University of Alicante (Spain). An analysis of the results showed that this educational software had the potential to enhance students’ understanding of graph theory and could enable them to apply these concepts to solve practical problems in the field of computer science. In addition, it was shown to facilitate self-learning and to have a significant impact on their academic performance. Full article
(This article belongs to the Special Issue ICT in Education)
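The kind of graph algorithm a tool like MaGraDa lets students step through — here breadth-first traversal over an adjacency list — can be sketched in a few lines (shown in Python for consistency with the other sketches in this listing, though MaGraDa itself is a Java application):

```python
from collections import deque

def bfs_order(adj, start):
    """Breadth-first traversal order over an adjacency-list graph:
    visit the start vertex, then all neighbours, then their neighbours."""
    seen, order, queue = {start}, [], deque([start])
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return order

adj = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
visited = bfs_order(adj, "A")   # ['A', 'B', 'C', 'D']
```

Visualising exactly this queue-driven frontier, step by step, is where students typically report the biggest gain over pencil-and-paper tracing.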

19 pages, 5691 KiB  
Article
Cuckoo Coupled Improved Grey Wolf Algorithm for PID Parameter Tuning
Appl. Sci. 2023, 13(23), 12944; https://doi.org/10.3390/app132312944 - 04 Dec 2023
Cited by 1 | Viewed by 457
Abstract
In today’s automation control systems, the PID controller, as a core technology, is widely used to maintain the system output near the set value. However, in some complex control environments, such as the application of ball screw-driven rotating motors, traditional PID parameter adjustment [...] Read more.
In today’s automation control systems, the PID controller, as a core technology, is widely used to keep the system output near the set value. However, in some complex control environments, such as ball screw-driven rotating motors, traditional PID parameter tuning methods may not meet the system’s requirements for high precision, high performance, and fast response, making it difficult to ensure the stability and production efficiency of the mechanical system. Therefore, this paper proposes a cuckoo search optimisation coupled with improved grey wolf optimisation (CSO_IGWO) algorithm to tune PID controller parameters, aiming to resolve the problems of the traditional grey wolf optimisation (GWO) algorithm, such as slow optimisation speed, weak exploitation ability, and ease of falling into locally optimal solutions. First, the tent chaotic mapping method is used to initialise the population instead of random initialisation, enriching the diversity of individuals in the population. Second, the value of the control parameter is adjusted by a nonlinear decay method to balance the exploration and exploitation capacity of the population. Finally, inspired by the cuckoo search optimisation (CSO) algorithm, the Levy flight strategy is introduced into the position update equation so that individual grey wolves can make large jumps to expand the search area and avoid falling into local optima. To verify the effectiveness of the algorithm, this study first demonstrates the superiority of the improved algorithm on eight benchmark test functions. Comparing this method with two other improved grey wolf algorithms shows that it improves the mean and standard deviation of the results by an order of magnitude and effectively improves the global search ability and convergence speed.
Finally, in the experimental section, three parameter tuning methods were compared from four aspects: overshoot, steady-state time, rise time, and steady-state error, using the ball screw motor as the control object. In terms of overall dynamic performance, the method proposed in this article is superior to the other three parameter tuning methods. Full article
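Two of the ingredients named in the abstract — tent-chaotic-map initialisation and Levy-flight steps — can be sketched generically. The tent-map breakpoint (0.7), the Mantegna construction, and all parameter values are common textbook choices, not necessarily the paper's exact settings:

```python
import math, random

def tent_map_population(n_agents, dim, lo, hi, x0=0.37):
    """Tent-chaotic-map initialisation: the chaotic sequence covers (0, 1)
    more evenly than independent uniform draws, enriching diversity."""
    pop, x = [], x0
    for _ in range(n_agents):
        agent = []
        for _ in range(dim):
            x = x / 0.7 if x < 0.7 else (1 - x) / 0.3   # tent map, breakpoint 0.7
            agent.append(lo + x * (hi - lo))
        pop.append(agent)
    return pop

def levy_step(beta=1.5):
    """One Mantegna-style Levy-flight step: heavy-tailed, so occasional
    large jumps let a wolf escape a local optimum."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u, v = random.gauss(0, sigma), random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

wolves = tent_map_population(n_agents=5, dim=3, lo=-10.0, hi=10.0)
```

In the hybrid algorithm, a scaled `levy_step()` would be added to the GWO position update, replacing the purely contraction-based move of standard GWO.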

35 pages, 15073 KiB  
Article
Radio-Frequency Identification Traceability System Implementation in the Packaging Section of an Industrial Company
Appl. Sci. 2023, 13(23), 12943; https://doi.org/10.3390/app132312943 - 04 Dec 2023
Viewed by 1078
Abstract
In recent years, radio-frequency identification (RFID) has aroused significant interest from industry and academia. This demand comes from the technology’s evolution, marked by a reduction in size, cost, and enhanced efficiency, making it increasingly accessible for diverse applications. This manuscript presents a case [...] Read more.
In recent years, radio-frequency identification (RFID) has aroused significant interest from industry and academia. This interest stems from the technology’s evolution, marked by reductions in size and cost and enhanced efficiency, making it increasingly accessible for diverse applications. This manuscript presents a case study of the implementation of an RFID traceability system in the packaging section of an industrial company that produces test equipment for the automotive wiring industry. The study presents the proposal and execution of a prototype asset-tracking system utilising RFID technology, designed to be adaptable and beneficial for various industrial settings. The experiments were carried out within the company’s shop-floor environment, alongside the existing barcode system, with the primary objective of evaluating and comparing the proposed solution. The test results demonstrate a significant enhancement in production efficiency, with substantial optimization achieved. The time required for asset identification and tracking was significantly reduced, resulting in an average time of approximately 43.62 s and an approximate 3.627% improvement in the time required to read the test sample of assets when compared to the barcode system. This successful implementation highlights the potential of RFID technology in improving operations, reducing working time, and enhancing traceability within industrial production processes. Full article
15 pages, 1138 KiB  
Review
Radiomic Analysis for Human Papillomavirus Assessment in Oropharyngeal Carcinoma: Lessons and Pitfalls for the Next Future
Appl. Sci. 2023, 13(23), 12942; https://doi.org/10.3390/app132312942 - 04 Dec 2023
Viewed by 551
Abstract
Background: Oropharyngeal Squamous Cell Carcinoma (OPSCC) is rapidly increasing due to the spread of Human Papillomavirus (HPV) infection. HPV-positive disease has unique characteristics, with a better response to treatment and consequently a better prognosis. HPV status is routinely assessed via p16 immunohistochemistry or HPV DNA Polymerase Chain Reaction. Radiomics is a quantitative approach to medical imaging that can overcome the limitations of subjective image interpretation and enable correlation with clinical data. The aim of this narrative review is to evaluate the impact of radiomic features on assessing HPV status in OPSCC patients. Methods: A narrative review was performed by synthesizing literature results from PUBMED, using Medical Subject Headings (MeSH) terms in the search strategy. Retrospective mono- or multicentric works assessing the correlation between radiomic features and HPV status prediction in OPSCC were included; only English-language studies on humans published between July 2015 and April 2023 were considered. Results: Our search returned 23 published papers; the accuracy of the radiomic models was evaluated using ROC curves and AUC values. MRI- and CT-based radiomic models proved comparably effective. Metabolic imaging also proved important for determining HPV status, albeit with lower AUC values. Conclusions: Radiomic features from conventional imaging can play a complementary role in the assessment of HPV status in OPSCC. Models combining primary tumor- and nodal-related features, as well as multisequence-based models, demonstrated higher accuracy. Full article
(This article belongs to the Special Issue Advances in Radiation Therapy for Tumor Treatment)
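The AUC metric used to compare these radiomic models summarizes a classifier's ROC curve as a single number: the probability that a randomly chosen positive case is scored above a randomly chosen negative one. A minimal sketch of that rank-based formulation (equivalent to the Mann–Whitney statistic; the labels and scores below are made up, not from the reviewed studies):

```python
def roc_auc(labels, scores):
    """AUC as the probability that a random positive outranks a random
    negative case (ties count as half); equals the area under the ROC curve."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Made-up example: scores from a hypothetical HPV-status classifier
print(roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # → 0.75
```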