Algorithms, Volume 16, Issue 2 (February 2023) – 64 articles

Cover Story: We explored the ability of a deep learning algorithm to segment ancient Egyptian hieroglyphs present in an image. The task is complex: the main obstacles are the high number of different classes of existing hieroglyphs and the differences related to the hand of the scribe, as well as the great variation among the supports, such as papyri, stone or wood, on which they are written. Furthermore, deterioration of the supports occurs frequently in archaeological findings, partially corrupting the hieroglyphs. We leveraged the well-known Detectron2 platform to tackle this difficult challenge, focusing on the Mask R-CNN architecture to carry out image instance segmentation. The results show good achievements as well as the current limitations of our study.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF formats; PDF is the official version. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
18 pages, 7764 KiB  
Article
Fourier Neural Operator Network for Fast Photoacoustic Wave Simulations
by Steven Guan, Ko-Tsung Hsu and Parag V. Chitnis
Algorithms 2023, 16(2), 124; https://doi.org/10.3390/a16020124 - 19 Feb 2023
Cited by 4 | Viewed by 2612
Abstract
Simulation tools for photoacoustic wave propagation have played a key role in advancing photoacoustic imaging by providing quantitative and qualitative insights into parameters affecting image quality. Classical methods for numerically solving the photoacoustic wave equation rely on a fine discretization of space and can become computationally expensive for large computational grids. In this work, we applied Fourier Neural Operator (FNO) networks as a fast data-driven deep learning method for solving the 2D photoacoustic wave equation in a homogeneous medium. Comparisons between the FNO network and the pseudo-spectral time domain approach were made for the forward and adjoint simulations. Results demonstrate that the FNO network generated comparable simulations with small errors and was orders of magnitude faster than the pseudo-spectral time domain methods (~26× faster on a 64 × 64 computational grid and ~15× faster on a 128 × 128 computational grid). Moreover, the FNO network was generalizable to the unseen out-of-domain test set with a root-mean-square error of 9.5 × 10−3 in Shepp–Logan, 1.5 × 10−2 in synthetic vasculature, 1.1 × 10−2 in tumor and 1.9 × 10−2 in Mason-M phantoms on a 64 × 64 computational grid and a root-mean-square error of 6.9 ± 5.5 × 10−3 in the AWA2 dataset on a 128 × 128 computational grid. Full article
(This article belongs to the Special Issue Deep Learning Architecture and Applications)
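The core of an FNO layer is a spectral convolution: transform the field to the frequency domain, apply learned weights to a truncated set of low modes, and transform back. A minimal 1D numpy sketch of that idea (not the authors' code; the grid size, mode count, and random weights are invented for illustration):

```python
import numpy as np

def fourier_layer(v, weights, n_modes):
    """One spectral convolution of an FNO layer (1D sketch):
    FFT the input, apply complex weights to the lowest n_modes
    frequencies, zero the rest, and inverse-FFT back."""
    v_hat = np.fft.rfft(v)                          # to frequency domain
    out_hat = np.zeros_like(v_hat)
    out_hat[:n_modes] = weights * v_hat[:n_modes]   # learned mode-wise filter
    return np.fft.irfft(out_hat, n=v.size)          # back to spatial domain

rng = np.random.default_rng(0)
n, n_modes = 64, 12
v = rng.standard_normal(n)                          # stand-in for a pressure field
w = rng.standard_normal(n_modes) + 1j * rng.standard_normal(n_modes)
out = fourier_layer(v, w, n_modes)
print(out.shape)  # (64,)
```

A real FNO stacks several such layers in 2D, adds pointwise linear transforms and nonlinearities, and learns the mode weights from simulation data; truncating to low modes is what makes the operator resolution-independent.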
20 pages, 3886 KiB  
Article
IRONEDGE: Stream Processing Architecture for Edge Applications
by João Pedro Vitorino, José Simão, Nuno Datia and Matilde Pato
Algorithms 2023, 16(2), 123; https://doi.org/10.3390/a16020123 - 17 Feb 2023
Viewed by 1449
Abstract
This paper presents IRONEDGE, an architectural framework that can be used in different edge Stream Processing solutions for “Smart Infrastructure” scenarios, on a case-by-case basis. The architectural framework identifies the common components that any such solution should implement, along with a generic processing pipeline. In particular, the framework is considered in the context of a case study regarding Internet of Things (IoT) devices to be attached to rolling stock in a railway. The limited computation and storage resources of edge devices and infrequent network connectivity are rarely considered together in the existing literature, but both were considered in this paper. Two distinct implementations of IRONEDGE were considered and tested. One, identified as Apache Kafka with Kafka Connect (K0-WC), uses Kafka Connect to pass messages from MQ Telemetry Transport (MQTT) to Apache Kafka. The second scenario, identified as Apache Kafka with No Kafka Connect (K1-NC), allows Apache Storm to consume messages directly. When the data rate increased, K0-WC showed low throughput resulting from high losses, whereas K1-NC displayed an increase in throughput but did not match the input rate for the Data Reports. The results showed that the framework can be used to define new solutions for edge Stream Processing scenarios and identified a reference implementation for the considered case study. In future work, the authors propose to extend the evaluation of the architectural variation of K1-NC. Full article
30 pages, 11833 KiB  
Article
Integral Backstepping Control Algorithm for a Quadrotor Positioning Flight Task: A Design Issue Discussion
by Yang-Rui Li, Chih-Chia Chen and Chao-Chung Peng
Algorithms 2023, 16(2), 122; https://doi.org/10.3390/a16020122 - 16 Feb 2023
Cited by 2 | Viewed by 1989
Abstract
For quadrotor control applications, it is necessary to rely on attitude angle changes to indirectly achieve the position trajectory tracking purpose. Several existing studies omit the non-negligible attitude transients in the position controller design for this kind of cascade system. As a result, the position tracking performance is not as good as expected. In fact, the transient behavior of the attitude tracking response cannot be ignored. Therefore, the closed-loop stability of the attitude loop as well as the position tracking should be considered simultaneously. In this study, the flight controller design of the position and attitude control loops is presented based on an integral backstepping control algorithm. This control algorithm relies on the derivatives of the associated virtual control laws for implementation. In the existing literature, the derivatives of the virtual control law are often approximated by numerical differentiation. Nevertheless, in practical scenarios, numerical differentiation causes chattering of the control signals in the presence of unavoidable measurement noise. The noise-induced control signals may further damage the actuators or even cause the system response to diverge. To address this issue, the analytic form of the derivative of the virtual control law is derived. The time derivative of the virtual control law is analyzed and split into a disturbance-independent compensable term and a disturbance-dependent non-compensable term. By utilizing the compensable term, the control chattering caused by differentiating the noise can be largely avoided. The simulation results reveal that the proposed control algorithm has better position tracking performance than the traditional dual-loop control scheme. Meanwhile, a relatively smooth control signal can be obtained for a realistic control algorithm realization.
Simulations are provided to illustrate the position tracking issue of a quadrotor and to demonstrate the effectiveness of the proposed compromise control scheme. Full article
(This article belongs to the Collection Feature Paper in Algorithms and Complexity Theory)
21 pages, 6631 KiB  
Article
Interpretation for Variational Autoencoder Used to Generate Financial Synthetic Tabular Data
by Jinhong Wu, Konstantinos Plataniotis, Lucy Liu, Ehsan Amjadian and Yuri Lawryshyn
Algorithms 2023, 16(2), 121; https://doi.org/10.3390/a16020121 - 16 Feb 2023
Cited by 2 | Viewed by 3173
Abstract
Synthetic data, artificially generated by computer programs, has become more widely used in the financial domain to mitigate privacy concerns. The Variational Autoencoder (VAE) is one of the most popular deep-learning models for generating synthetic data. However, the VAE is often considered a “black box” due to its opaqueness. Although some studies have been conducted to provide explanatory insights into VAEs, research focusing on explaining how the input data influence the synthetic data a VAE creates, especially for tabular data, is still lacking. Yet in the financial industry, most data are stored in a tabular format. This paper proposes a sensitivity-based method to assess the impact of input tabular data on how the VAE synthesizes data. This sensitivity-based method can provide both global and local interpretations efficiently and intuitively. To test this method, a simulated dataset and three Kaggle banking tabular datasets were employed. The results confirmed the applicability of the proposed method. Full article
(This article belongs to the Special Issue Interpretability, Accountability and Robustness in Machine Learning)
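A sensitivity-based interpretation of this kind can be illustrated with finite differences: perturb one input feature at a time and measure how much the synthesized output moves. In the sketch below (hypothetical; a simple linear map stands in for the trained VAE, and its diagonal scales are chosen so the expected ranking is known), a local per-sample measure is averaged into a global one:

```python
import numpy as np

def local_sensitivity(f, x, eps=1e-4):
    """Finite-difference sensitivity of f's output w.r.t. each input
    feature: a local, per-sample interpretation."""
    base = f(x)
    sens = np.empty(x.size)
    for i in range(x.size):
        xp = x.copy()
        xp[i] += eps
        sens[i] = np.linalg.norm(f(xp) - base) / eps
    return sens

# Linear stand-in for a trained VAE's encode-decode map; the diagonal
# scales make feature 0 dominant and feature 3 inert by construction.
W = np.diag([3.0, 1.0, 0.1, 0.0])
f = lambda x: W @ x

rng = np.random.default_rng(1)
X = rng.standard_normal((50, 4))  # 50 hypothetical tabular rows
global_sens = np.mean([local_sensitivity(f, x) for x in X], axis=0)
print(np.round(global_sens, 2))  # ≈ [3, 1, 0.1, 0]
```

The global view ranks features by average influence; keeping the per-row values instead gives the local interpretation for an individual record.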
17 pages, 9172 KiB  
Article
Rapid Prototyping of H∞ Algorithm for Real-Time Displacement Volume Control of Axial Piston Pumps
by Alexander Mitov, Tsonyo Slavov and Jordan Kralev
Algorithms 2023, 16(2), 120; https://doi.org/10.3390/a16020120 - 15 Feb 2023
Cited by 3 | Viewed by 1846
Abstract
A system for the rapid prototyping of real-time control algorithms for open-circuit variable displacement axial-piston pumps is presented. To establish real-time control, communication, and synchronization with the programmable logic controller of an axial piston pump, a custom CAN communication protocol is developed. This protocol is realized as a Simulink® S-function, which is part of the main Simulink® model. This model runs in real time and allows for the rapid prototyping of various control strategies, including advanced algorithms such as H∞ control. The aim of the algorithm is to achieve control system performance in the presence of various load disturbances with an admissible control signal rate and amplitude. In contrast to conventional systems, the developed solution suggests using an embedded approach for the prototyping of various algorithms. The obtained results show the advantages of the designed H∞ controller, which ensures the robustness of the closed-loop system in the presence of significant load disturbances. These types of systems with displacement volume regulation are important for industrial hydraulic drive systems with relatively high power. Full article
(This article belongs to the Collection Feature Papers in Algorithms)
14 pages, 2032 KiB  
Article
Periodicity Intensity Reveals Insights into Time Series Data: Three Use Cases
by Alan F. Smeaton and Feiyan Hu
Algorithms 2023, 16(2), 119; https://doi.org/10.3390/a16020119 - 15 Feb 2023
Cited by 2 | Viewed by 1529
Abstract
Periodic phenomena are oscillating signals found in many naturally occurring time series. A periodogram can be used to measure the intensities of oscillations at different frequencies over an entire time series, but sometimes we are interested in measuring how periodicity intensity at a specific frequency varies throughout the time series. This can be done by calculating the periodicity intensity within a window, then sliding the window and recalculating its intensity, giving an indication of how the periodicity intensity at a specific frequency changes throughout the series. We illustrate three applications of this, the first of which is the movement of a herd of new-born calves, where we show how the intensity of the 24 h periodicity increases and decreases synchronously across the herd. We also show how changes in the 24 h periodicity intensity of activities detected from in-home sensors can be indicative of overall wellness. We illustrate this on several weeks of sensor data gathered from each of the homes of 23 older adults. Our third application is the intensity of the 7-day periodicity of hundreds of university students accessing online resources from a virtual learning environment (VLE) and how the regularity of their weekly learning behaviours changes throughout a teaching semester. The paper demonstrates how periodicity intensity reveals insights into time series data not visible using other forms of analysis. Full article
(This article belongs to the Special Issue Machine Learning for Time Series Analysis)
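The sliding-window procedure described in the abstract is straightforward to sketch: compute a periodogram inside each window and record the fraction of power at the frequency of interest. A rough numpy illustration (not the authors' code; the hourly sampling, window length, step, and synthetic signal are invented for the example):

```python
import numpy as np

def periodicity_intensity(x, fs, target_freq, win, step):
    """Slide a window over x; in each window, compute a periodogram and
    record the fraction of total power at the target frequency."""
    freqs = np.fft.rfftfreq(win, d=1.0 / fs)
    k = np.argmin(np.abs(freqs - target_freq))  # bin nearest the target
    out = []
    for start in range(0, x.size - win + 1, step):
        seg = x[start:start + win]
        power = np.abs(np.fft.rfft(seg - seg.mean())) ** 2
        total = power.sum()
        out.append(power[k] / total if total > 0 else 0.0)
    return np.array(out)

# Hourly synthetic signal: a 24 h rhythm that disappears halfway through.
t = np.arange(24 * 20)                 # 20 days, one sample per hour
x = np.sin(2 * np.pi * t / 24.0)
x[t.size // 2:] = 0.0
intensity = periodicity_intensity(x, fs=1.0, target_freq=1 / 24.0,
                                  win=24 * 3, step=24)
print(intensity[0] > intensity[-1])  # True: the periodicity fades over time
```

Plotting `intensity` against window position gives exactly the kind of "periodicity intensity over time" trace the three use cases rely on.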
12 pages, 1947 KiB  
Article
EEG Data Augmentation for Emotion Recognition with a Task-Driven GAN
by Qing Liu, Jianjun Hao and Yijun Guo
Algorithms 2023, 16(2), 118; https://doi.org/10.3390/a16020118 - 15 Feb 2023
Cited by 2 | Viewed by 1981
Abstract
The high cost of acquiring training data in the field of emotion recognition based on electroencephalogram (EEG) is a problem, making it difficult to establish a high-precision model from EEG signals for emotion recognition tasks. Given the outstanding performance of generative adversarial networks (GANs) in data augmentation in recent years, this paper proposes a task-driven method based on CWGAN to generate high-quality artificial data. The generated data are represented as multi-channel EEG data differential entropy feature maps, and a task network (emotion classifier) is introduced to guide the generator during the adversarial training. The evaluation results show that the proposed method can generate artificial data with clearer classifications and distributions that are more similar to the real data, resulting in obvious improvements in EEG-based emotion recognition tasks. Full article
(This article belongs to the Special Issue Deep Neural Networks and Optimization Algorithms)
12 pages, 747 KiB  
Article
Extrinsic Bayesian Optimization on Manifolds
by Yihao Fang, Mu Niu, Pokman Cheung and Lizhen Lin
Algorithms 2023, 16(2), 117; https://doi.org/10.3390/a16020117 - 15 Feb 2023
Viewed by 1462
Abstract
We propose an extrinsic Bayesian optimization (eBO) framework for general optimization problems on manifolds. Bayesian optimization algorithms build a surrogate of the objective function by employing Gaussian processes and exploit the uncertainty in that surrogate by deriving an acquisition function. This acquisition function represents the probability of improvement based on the kernel of the Gaussian process, which guides the search in the optimization process. The critical challenge in designing Bayesian optimization algorithms on manifolds lies in the difficulty of constructing valid covariance kernels for Gaussian processes on general manifolds. Our approach is to employ extrinsic Gaussian processes by first embedding the manifold into some higher-dimensional Euclidean space via equivariant embeddings and then constructing a valid covariance kernel on the image manifold after the embedding. This leads to efficient and scalable algorithms for optimization over complex manifolds. Simulation studies and real data analyses are carried out to demonstrate the utility of our eBO framework by applying it to various optimization problems over manifolds such as the sphere, the Grassmannian, and the manifold of positive definite matrices. Full article
(This article belongs to the Special Issue Gradient Methods for Optimization)
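The extrinsic construction can be illustrated on the sphere, where mapping spherical coordinates to Cartesian coordinates in R^3 is an equivariant embedding: applying an ordinary RBF kernel to the embedded points yields a valid covariance on the manifold. A small numpy sketch under those assumptions (the length-scale and sample size are arbitrary choices for the example):

```python
import numpy as np

def embed_sphere(theta, phi):
    """Equivariant embedding of the sphere S^2 into R^3
    (spherical coordinates -> Cartesian coordinates)."""
    return np.stack([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)], axis=-1)

def extrinsic_rbf(X, Y, lengthscale=0.5):
    """A valid covariance kernel on the sphere: an ordinary RBF kernel
    evaluated on the embedded Euclidean coordinates."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2 * lengthscale ** 2))

rng = np.random.default_rng(2)
theta = rng.uniform(0.0, np.pi, 40)
phi = rng.uniform(0.0, 2 * np.pi, 40)
X = embed_sphere(theta, phi)
K = extrinsic_rbf(X, X)

# Symmetric and positive semi-definite: usable as a GP covariance.
print(np.linalg.eigvalsh(K).min() >= -1e-9)  # True
```

With a valid `K`, the usual GP posterior and acquisition machinery applies unchanged; the manifold structure only enters through the embedding.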
14 pages, 15445 KiB  
Article
PigSNIPE: Scalable Neuroimaging Processing Engine for Minipig MRI
by Michal Brzus, Kevin Knoernschild, Jessica C. Sieren and Hans J. Johnson
Algorithms 2023, 16(2), 116; https://doi.org/10.3390/a16020116 - 15 Feb 2023
Cited by 1 | Viewed by 1332
Abstract
Translation of basic animal research to find effective methods of diagnosing and treating human neurological disorders requires parallel analysis infrastructures. Small animals such as mice provide exploratory animal disease models. However, many interventions developed using small animal models fail to translate to human use due to physical or biological differences. Recently, large-animal minipigs have emerged in neuroscience owing to both their brain similarity and economic advantages. Medical image processing is a crucial part of research, as it allows researchers to monitor their experiments and understand disease development. By pairing four reinforcement learning models and five deep learning UNet segmentation models with existing algorithms, we developed PigSNIPE, a pipeline for the automated handling, processing, and analysis of large-scale datasets of minipig MR images. PigSNIPE allows for image registration, AC-PC alignment, detection of 19 anatomical landmarks, skull stripping, brainmask and intracranial volume segmentation (DICE 0.98), tissue segmentation (DICE 0.82), and caudate-putamen brain segmentation (DICE 0.8) in under two minutes. To the best of our knowledge, this is the first automated pipeline tool aimed at large animal images, which can significantly reduce the time and resources needed for analyzing minipig neuroimages. Full article
(This article belongs to the Special Issue Machine Learning Algorithms for Big Data Analysis)
20 pages, 3553 KiB  
Article
An Energy-Aware Load Balancing Method for IoT-Based Smart Recycling Machines Using an Artificial Chemical Reaction Optimization Algorithm
by Sara Tabaghchi Milan, Mehdi Darbandi, Nima Jafari Navimipour and Senay Yalcın
Algorithms 2023, 16(2), 115; https://doi.org/10.3390/a16020115 - 14 Feb 2023
Cited by 1 | Viewed by 1430
Abstract
Recycling is very important for a sustainable and clean environment. Developed and developing countries alike face waste management and recycling problems. The Internet of Things (IoT), meanwhile, is a well-known and widely applicable infrastructure for connecting physical devices. It is an important technology, researched and implemented in recent years, that promises to positively influence several industries, including recycling and trash management. The impact of the IoT on recycling and waste management is examined using standard operating practices in recycling. Recycling facilities, for instance, can use the IoT to manage and monitor the recycling situation in various places while allocating the logistics for transportation and distribution processes to minimize recycling costs and lead times. Companies can thus use historical patterns to track usage trends in their service regions, assess their accessibility to gather resources, and arrange their activities accordingly. Additionally, energy is a significant aspect of the IoT, since many devices will be linked to the internet, and the devices, sensors, nodes, and objects are all energy-restricted. Because the devices are constrained by their nature, the load-balancing protocol is crucial in an IoT ecosystem. Given the importance of this issue, this study presents an energy-aware load-balancing method for IoT-based smart recycling machines using an artificial chemical reaction optimization algorithm. The experimental results indicated that the proposed solution could achieve excellent performance: the imbalance degree (5.44%), energy consumption (11.38%), and delay time (9.05%) were reduced using the proposed method. Full article
(This article belongs to the Special Issue AI-Based Algorithms in IoT-Edge Computing)
17 pages, 3850 KiB  
Article
On-Board Decentralized Observation Planning for LEO Satellite Constellations
by Bingyu Song, Yingwu Chen, Qing Yang, Yahui Zuo, Shilong Xu and Yuning Chen
Algorithms 2023, 16(2), 114; https://doi.org/10.3390/a16020114 - 14 Feb 2023
Cited by 2 | Viewed by 1480
Abstract
Multi-satellite on-board observation planning (MSOOP) is a variant of the multi-agent task allocation problem (MATAP). MSOOP is used to complete the observation task allocation in a fully cooperative mode to maximize the profit of the whole system. In this paper, MSOOP for LEO satellite constellations is investigated, and a decentralized algorithm is exploited for solving it. The problem description of MSOOP for LEO satellite constellations is detailed. The coupled constraints make MSOOP more complex than other task allocation problems. An improved Consensus-Based Bundle Algorithm (ICBBA), which includes a bundle construction phase and a consensus check phase, is proposed. A constraint check and a mask recovery are introduced into bundle construction and consensus check to handle the coupled constraints. The fitness function is adjusted to adapt to the characteristics of different scenes. Experimental results on a series of instances demonstrate the effectiveness of the proposed algorithm. Full article
20 pages, 6726 KiB  
Article
Examination of Lemon Bruising Using Different CNN-Based Classifiers and Local Spectral-Spatial Hyperspectral Imaging
by Razieh Pourdarbani, Sajad Sabzi, Mohsen Dehghankar, Mohammad H. Rohban and Juan I. Arribas
Algorithms 2023, 16(2), 113; https://doi.org/10.3390/a16020113 - 14 Feb 2023
Cited by 5 | Viewed by 2145
Abstract
The presence of bruises on fruits often indicates cell damage, which can lead to a decrease in the ability of the peel to keep oxygen away from the fruit; as a result, oxygen breaks down cell walls and membranes, damaging the fruit content. When chemicals in the fruit are oxidized by enzymes such as polyphenol oxidase, the chemical reaction produces an undesirable and apparent brown color effect, among others. Early detection of bruising prevents low-quality fruit from entering the consumer market. Hence, the present paper aims at the early identification of bruised lemon fruits using 3D convolutional neural networks (3D-CNNs) via a local spectral-spatial hyperspectral imaging technique, which takes into account adjacent image pixel information in both the frequency (wavelength) and spatial domains of a 3D-tensor hyperspectral image of input lemon fruits. A total of 70 sound lemons were picked from orchards. First, all fruits were labeled and hyperspectral images (wavelength range 400–1100 nm) were captured as belonging to the healthy (unbruised) class (class label 0). Next, bruising was applied to each lemon by freefall. Then, hyperspectral images of all bruised samples were captured 8 h (class label 1) and 16 h (class label 2) after bruising was induced, resulting in a ternary (3-class) classification problem. Four well-known 3D-CNN models, namely ResNet, ShuffleNet, DenseNet, and MobileNet, were used to classify the bruised lemons in Python. Results revealed that the highest classification accuracy (90.47%) was obtained by the ResNet model, followed by DenseNet (85.71%), ShuffleNet (80.95%) and MobileNet (73.80%), all evaluated on the test set. The ResNet model had a larger parameter size, but it trained faster than the other models with fewer free parameters. ShuffleNet and MobileNet were easier to train and needed less storage, but they could not achieve a classification error as low as that of the other two counterparts. Full article
19 pages, 4618 KiB  
Article
V-SOC4AS: A Vehicle-SOC for Improving Automotive Security
by Vita Santa Barletta, Danilo Caivano, Mirko De Vincentiis, Azzurra Ragone, Michele Scalera and Manuel Ángel Serrano Martín
Algorithms 2023, 16(2), 112; https://doi.org/10.3390/a16020112 - 14 Feb 2023
Cited by 12 | Viewed by 2715
Abstract
Integrating embedded systems into next-generation vehicles is proliferating as they increase safety, efficiency, and driving comfort. These functionalities are provided by hundreds of electronic control units (ECUs) that communicate with each other using various protocols that, if not properly designed, may be vulnerable to local or remote attacks. This paper presents a vehicle security operation center for improving automotive security (V-SOC4AS) to enhance the detection of, response to, and prevention of cyber-attacks in the automotive context. The goal is to monitor in real time each subsystem of intra-vehicle communication, namely controller area network (CAN), local interconnect network (LIN), FlexRay, media oriented systems transport (MOST), and Ethernet. To achieve this goal, security information and event management (SIEM) was used to monitor and detect malicious attacks in intra-vehicle and inter-vehicle communications: messages transmitted between vehicle ECUs; infotainment and telematics systems, which provide passengers with entertainment capabilities and information about the vehicle system; and vehicular ports, which allow vehicles to connect to diagnostic devices and upload content of various types. As a result, this allows the automation and improvement of threat detection and incident response processes. Furthermore, V-SOC4AS allows received messages to be classified as malicious or non-malicious and additional information about the type of attack to be acquired. This reduces the detection time and provides more support for response activities. Experimental evaluation was conducted on two state-of-the-art attacks: denial of service (DoS) and fuzzing. An open-source dataset was used to simulate the vehicles. V-SOC4AS exploits security information and event management to analyze the packets sent by a vehicle using a rule-based mechanism. If the payload contains a CAN frame attack, the SOC analysts are notified. Full article
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)
15 pages, 574 KiB  
Article
Learning Data for Neural-Network-Based Numerical Solution of PDEs: Application to Dirichlet-to-Neumann Problems
by Ferenc Izsák and Taki Eddine Djebbar
Algorithms 2023, 16(2), 111; https://doi.org/10.3390/a16020111 - 14 Feb 2023
Cited by 1 | Viewed by 1271
Abstract
We propose neural-network-based algorithms for the numerical solution of boundary-value problems for the Laplace equation. Such a numerical solution is inherently mesh-free, and in the approximation process, stochastic algorithms are employed. The chief challenge in the solution framework is to generate appropriate learning data in the absence of the solution. Our main idea was to use fundamental solutions for this purpose and make a link with the so-called method of fundamental solutions. In this way, beyond the classical boundary-value problems, Dirichlet-to-Neumann operators can also be approximated. This problem was investigated in detail. Moreover, for this complex problem, low-rank approximations were constructed. Such efficient solution algorithms can serve as a basis for computational electrical impedance tomography. Full article
(This article belongs to the Special Issue Computational Methods and Optimization for Numerical Analysis)
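The idea of generating learning data from fundamental solutions can be sketched directly: superpose fundamental solutions of the 2D Laplace equation, Φ(x) = −ln|x − y| / 2π, with singularities y placed outside the domain, which yields exact harmonic functions whose boundary and interior values can serve as training pairs. A rough Python illustration (the source locations and evaluation point are invented for the example):

```python
import numpy as np

def phi(x, y, src):
    """Fundamental solution of the 2D Laplace equation with its
    singularity at src: harmonic everywhere away from src."""
    r = np.hypot(x - src[0], y - src[1])
    return -np.log(r) / (2 * np.pi)

# Sources placed OUTSIDE the unit disk, so the superposition is an exact
# harmonic function inside it -- mesh-free learning data with known values.
sources = [(2.0, 0.0), (0.0, 2.5), (-3.0, 1.0)]
u = lambda x, y: sum(phi(x, y, s) for s in sources)

# Sanity check: the discrete Laplacian vanishes at an interior point.
h = 1e-3
x0, y0 = 0.3, -0.2
lap = (u(x0 + h, y0) + u(x0 - h, y0) + u(x0, y0 + h) + u(x0, y0 - h)
       - 4 * u(x0, y0)) / h**2
print(abs(lap) < 1e-5)  # True
```

Sampling such superpositions with varying source positions and weights generates as many exact (boundary data, solution) pairs as the network training requires, with no mesh involved.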
13 pages, 1355 KiB  
Article
Model Parallelism Optimization for CNN FPGA Accelerator
by Jinnan Wang, Weiqin Tong and Xiaoli Zhi
Algorithms 2023, 16(2), 110; https://doi.org/10.3390/a16020110 - 14 Feb 2023
Cited by 7 | Viewed by 1852
Abstract
Convolutional neural networks (CNNs) have made impressive achievements in image classification and object detection. For hardware with limited resources, it is not easy to achieve CNN inference with a large number of parameters without external storage. Model parallelism is an effective way to reduce resource usage by distributing CNN inference among several devices. However, parallelizing a CNN model is not easy, because CNN models have an essentially tightly-coupled structure. In this work, we propose a novel model parallelism method to decouple the CNN structure with group convolution and a new channel shuffle procedure. Our method could eliminate inter-device synchronization while reducing the memory footprint of each device. Using the proposed model parallelism method, we designed a parallel FPGA accelerator for the classic CNN model ShuffleNet. This accelerator was further optimized with features such as aggregate read and kernel vectorization to fully exploit the hardware-level parallelism of the FPGA. We conducted experiments with ShuffleNet on two FPGA boards, each of which had an Intel Arria 10 GX1150 and 16 GB of DDR3 memory. The experimental results showed that when using two devices, ShuffleNet achieved a 1.42× speedup and reduced its memory footprint by 34%, as compared to its non-parallel counterpart, while maintaining accuracy. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
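The channel shuffle operation on which ShuffleNet-style group convolution relies can be expressed as a reshape–transpose–reshape; a minimal NumPy sketch (the tensor shape and group count below are illustrative, not the accelerator's configuration):

```python
import numpy as np

def channel_shuffle(x, groups):
    """Shuffle the channels of an NCHW tensor across `groups` groups.

    Reshape (N, C, H, W) -> (N, g, C//g, H, W), swap the group axis with
    the per-group channel axis, then flatten back to (N, C, H, W).
    """
    n, c, h, w = x.shape
    assert c % groups == 0, "channel count must be divisible by groups"
    x = x.reshape(n, groups, c // groups, h, w)
    x = x.transpose(0, 2, 1, 3, 4)
    return x.reshape(n, c, h, w)

# Example: 6 channels in 2 groups -> channel order 0, 3, 1, 4, 2, 5
x = np.arange(6).reshape(1, 6, 1, 1)
print(channel_shuffle(x, 2).ravel())  # [0 3 1 4 2 5]
```

Because the shuffle is a fixed permutation, it mixes information between groups without any cross-group arithmetic, which is what makes it a natural decoupling point for distributing the groups across devices.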
22 pages, 710 KiB  
Article
Towards a Flexible Assessment of Compliance with Clinical Protocols Using Fuzzy Aggregation Techniques
by Anna Wilbik, Irene Vanderfeesten, Dennis Bergmans, Serge Heines, Oktay Turetken and Walther van Mook
Algorithms 2023, 16(2), 109; https://doi.org/10.3390/a16020109 - 13 Feb 2023
Cited by 1 | Viewed by 1864
Abstract
In healthcare settings, compliance with clinical protocols and medical guidelines is important to ensure high-quality, safe and effective treatment of patients. How to measure compliance and how to represent compliance information in an interpretable and actionable way is still an open challenge. In this paper, we propose new metrics for compliance assessment. For this purpose, we use two fuzzy aggregation techniques, namely the OWA operator and the Sugeno integral. The proposed measures take into consideration three factors: (i) the degree of compliance with a single activity, (ii) the degree of compliance of a patient, and (iii) the importance of the activities. The proposed measures are applied to two clinical protocols used in practice. We demonstrate that the proposed measures can aid clinicians in assessing protocol compliance when evaluating the effectiveness of implemented clinical protocols. Full article
(This article belongs to the Special Issue Artificial Intelligence Algorithms for Healthcare)
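The first of the two aggregation techniques, the OWA (ordered weighted averaging) operator, applies its weights to the *sorted* arguments rather than to particular arguments; a minimal pure-Python sketch (the compliance degrees and weight vectors are illustrative assumptions, not values from the paper):

```python
def owa(values, weights):
    """Ordered weighted average: weights attach to rank positions
    (largest value first), not to specific arguments."""
    if len(values) != len(weights) or abs(sum(weights) - 1.0) > 1e-9:
        raise ValueError("weights must match values in length and sum to 1")
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ordered))

# Per-activity compliance degrees for one hypothetical patient
compliance = [0.9, 0.4, 1.0, 0.7]
mean_like = owa(compliance, [0.25, 0.25, 0.25, 0.25])  # plain mean
pessimistic = owa(compliance, [0.0, 0.0, 0.5, 0.5])    # weights the worst scores
print(mean_like, pessimistic)
```

Shifting weight toward the tail of the ordering produces a stricter, "worst-case leaning" compliance score, which is what makes OWA attractive for flexible assessment.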
23 pages, 4918 KiB  
Review
Algorithms in Low-Code-No-Code for Research Applications: A Practical Review
by Fahim Sufi
Algorithms 2023, 16(2), 108; https://doi.org/10.3390/a16020108 - 13 Feb 2023
Cited by 21 | Viewed by 7577
Abstract
Algorithms have evolved from machine code to low-code-no-code (LCNC) over the past 20 years. Observing the growth of LCNC-based algorithm development, the CEO of GitHub remarked that the future of coding is no coding at all. This paper systematically reviewed several of the recent studies using mainstream LCNC platforms to understand the areas of research, the LCNC platforms used within these studies, and the LCNC features used for solving individual research questions. We identified 23 research works using LCNC platforms, such as SetXRM, the vf-OS platform, Aure-BPM, CRISP-DM, and Microsoft Power Platform (MPP). About 61% of these studies adopted MPP as their primary platform. The critical research problems solved by these works were within the areas of global news analysis, social media analysis, landslides, tornadoes, COVID-19, digitization of processes, manufacturing, logistics, and software/app development. The main reasons identified for solving research problems with LCNC algorithms were as follows: (1) obtaining research data from multiple sources in a fully automated manner; (2) generating artificial intelligence-driven insights without having to manually code them. In the course of this review, the paper also demonstrates a practical approach to implementing a cyber-attack monitoring algorithm with the most popular LCNC platform. Full article
(This article belongs to the Collection Featured Reviews of Algorithms)
14 pages, 6333 KiB  
Article
Self-Sustainability Assessment for a High Building Based on Linear Programming and Computational Fluid Dynamics
by Carlos Oliveira, José Baptista and Adelaide Cerveira
Algorithms 2023, 16(2), 107; https://doi.org/10.3390/a16020107 - 13 Feb 2023
Cited by 2 | Viewed by 1377
Abstract
Given the excessive use of energy from non-renewable sources, new energy generation solutions must be adopted to offset this consumption. In this sense, the integration of renewable energy sources in high-rise buildings reduces the need for energy from the national power grid and maximizes the self-sustainability of common services. Moreover, self-consumption in low-voltage and medium-voltage networks strongly facilitates a reduction in external energy dependence. For consumers, the benefits of installing small wind turbines and energy storage systems include tax benefits and reduced electricity bills, as well as a profitable system after the payback period. This paper focuses on assessing the wind potential of a high-rise building through computational fluid dynamics (CFD) simulations, quantifying the potential for wind energy production by small wind turbines (WT) at the installation site. Furthermore, a mathematical model is proposed to optimize wind energy production for a self-consumption system, minimizing the total cost of energy purchased from the grid and maximizing the return on investment. CFD-based design practice, which has wide application in the development of varied processes and equipment, yields a large reduction in the time and costs spent compared to conventional practices. Furthermore, the optimization model guarantees a significant decrease in the energy purchased at peak hours through the energy stored in energy storage systems (ESS). The results show that the efficiency of the proposed model leads to an investment amortization period of 7 years for a lifetime of 20 years. Full article
(This article belongs to the Special Issue Optimization in Renewable Energy Systems)
3 pages, 177 KiB  
Editorial
Special Issue on Logic-Based Artificial Intelligence
by Giovanni Amendola
Algorithms 2023, 16(2), 106; https://doi.org/10.3390/a16020106 - 13 Feb 2023
Viewed by 1641
Abstract
Since its inception, research in the field of Artificial Intelligence (AI) has had a fundamentally logical approach; discussions have therefore taken place to establish a way of distinguishing symbolic AI from sub-symbolic AI, the latter being based instead on the statistical approaches typical of machine learning, deep learning or Bayesian networks [...] Full article
(This article belongs to the Special Issue Logic-Based Artificial Intelligence)
34 pages, 870 KiB  
Article
Union Models for Model Families: Efficient Reasoning over Space and Time
by Sanaa Alwidian, Daniel Amyot and Yngve Lamo
Algorithms 2023, 16(2), 105; https://doi.org/10.3390/a16020105 - 11 Feb 2023
Cited by 1 | Viewed by 1505
Abstract
A model family is a set of related models in a given language, with commonalities and variabilities that result from evolution of models over time and/or variation over intended usage (the spatial dimension). As the family size increases, it becomes cumbersome to analyze models individually. One solution is to represent a family using one global model that supports analysis. In this paper, we propose the concept of a union model as a complete and concise representation of all members of a model family. We use graph theory to formalize a model family as a set of attributed typed graphs in which all models are typed over the same metamodel. The union model is formalized as the union of all graph elements in the family. These graph elements are annotated with their corresponding model versions and configurations. This formalization is independent of the modeling language used. We also demonstrate how union models can be used to perform reasoning tasks on model families, e.g., trend analysis and property checking. Empirical results suggest potential time-saving benefits when using union models for analysis and reasoning over a set of models all at once, as opposed to separately analyzing single models one at a time. Full article
20 pages, 520 KiB  
Article
Quadratic Multilinear Discriminant Analysis for Tensorial Data Classification
by Cristian Minoccheri, Olivia Alge, Jonathan Gryak, Kayvan Najarian and Harm Derksen
Algorithms 2023, 16(2), 104; https://doi.org/10.3390/a16020104 - 11 Feb 2023
Cited by 1 | Viewed by 1258
Abstract
Over the past decades, there has been increasing attention to adapting machine learning methods to fully exploit the higher-order structure of tensorial data. One problem of great interest is tensor classification, and in particular the extension of linear discriminant analysis to the multilinear setting. We propose a novel method for multilinear discriminant analysis that is radically different from the ones considered so far, and it is the first extension to tensors of quadratic discriminant analysis. Our proposed approach uses invariant theory to extend the nearest Mahalanobis distance classifier to the higher-order setting, and to formulate a well-behaved optimization problem. We extensively test our method on a variety of synthetic data, outperforming previously proposed MDA techniques. We also show how to leverage multi-lead ECG data by constructing tensors via taut string, and use our method to classify healthy signals versus unhealthy ones; our method outperforms state-of-the-art MDA methods, especially after adding significant levels of noise to the signals. Our approach reached an AUC of 0.95(0.03) on clean signals—where the second best method reached 0.91(0.03)—and an AUC of 0.89(0.03) after adding noise to the signals (with a signal-to-noise ratio of 30)—where the second best method reached 0.85(0.05). Our approach is fundamentally different from previous work in this direction, and proves to be faster, more stable, and more accurate in the tests we performed. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
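The classical (vector-valued) nearest Mahalanobis distance classifier that this work extends to tensors can be sketched in a few lines; the Gaussian toy data and the small regularization term below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def fit_classes(X, y):
    """Estimate a per-class mean and inverse covariance from training data."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        mu = Xc.mean(axis=0)
        # Small ridge term keeps the covariance invertible
        cov = np.cov(Xc, rowvar=False) + 1e-6 * np.eye(X.shape[1])
        params[c] = (mu, np.linalg.inv(cov))
    return params

def predict(x, params):
    """Assign x to the class with the smallest squared Mahalanobis distance."""
    def d2(c):
        mu, icov = params[c]
        diff = x - mu
        return diff @ icov @ diff
    return min(params, key=d2)

# Two well-separated 2-D Gaussian classes
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, size=(200, 2)),
               rng.normal(4.0, 1.0, size=(200, 2))])
y = np.array([0] * 200 + [1] * 200)
params = fit_classes(X, y)
print(predict(np.array([0.1, -0.2]), params))  # 0
print(predict(np.array([3.8, 4.2]), params))   # 1
```

The tensor extension in the paper replaces these per-class covariance estimates with multilinear structure; this flat version is only the baseline being generalized.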
18 pages, 633 KiB  
Article
Local Convergence Analysis of a One Parameter Family of Simultaneous Methods with Applications to Real-World Problems
by Tsonyo M. Pavkov, Valentin G. Kabadzhov, Ivan K. Ivanov and Stoil I. Ivanov
Algorithms 2023, 16(2), 103; https://doi.org/10.3390/a16020103 - 10 Feb 2023
Cited by 3 | Viewed by 1114
Abstract
In this paper, we provide a detailed local convergence analysis of a one-parameter family of iteration methods for the simultaneous approximation of polynomial zeros due to Ivanov (Numer. Algor. 75(4): 1193–1204, 2017). Thus, we obtain two local convergence theorems that provide sufficient conditions to guarantee the Q-cubic convergence of all members of the family. Among other contributions, our results unify the most recent results of this kind for the well-known Dochev–Byrnev and Ehrlich methods. Several practical applications are further given to emphasize the advantages of the studied family of methods and to show the applicability of the theoretical results. Full article
19 pages, 434 KiB  
Article
Metamorphic Testing of Relation Extraction Models
by Yuhe Sun, Zuohua Ding, Hongyun Huang, Senhao Zou and Mingyue Jiang
Algorithms 2023, 16(2), 102; https://doi.org/10.3390/a16020102 - 10 Feb 2023
Viewed by 1311
Abstract
Relation extraction (RE) is a fundamental NLP task that aims to identify relations between entities mentioned in a given text. RE forms the basis for many advanced NLP tasks, such as question answering and text summarization, and thus its quality is critical to the relevant downstream applications. However, evaluating the quality of RE models is non-trivial. On the one hand, obtaining ground truth labels for individual test inputs is tedious and even difficult. On the other hand, there is an increasing need to understand the characteristics of RE models in terms of various aspects. To mitigate these issues, this study proposes evaluating RE models by applying metamorphic testing (MT). A total of eight metamorphic relations (MRs) are identified based on three categories of transformation operations, namely replacement, swap, and combination. These MRs encode some expected properties of different aspects of RE. We further apply MT to three popular RE models. Our experiments reveal a large number of prediction failures in the subject RE models, confirming that MT is effective for evaluating RE models. Further analysis of the experimental results reveals the advantages and disadvantages of our subject models and also uncovers some typical issues of RE models. Full article
(This article belongs to the Section Analysis of Algorithms and Complexity Theory)
15 pages, 4402 KiB  
Article
Comparative Analysis of the Methods for Fiber Bragg Structures Spectrum Modeling
by Timur Agliullin, Vladimir Anfinogentov, Oleg Morozov, Airat Sakhabutdinov, Bulat Valeev, Ayna Niyazgulyeva and Yagmyrguly Garovov
Algorithms 2023, 16(2), 101; https://doi.org/10.3390/a16020101 - 10 Feb 2023
Cited by 6 | Viewed by 1417
Abstract
The work is dedicated to a comparative analysis of the following methods for fiber Bragg grating (FBG) spectral response modeling. The Layer Sweep (LS) method, which is similar to the common layer peeling algorithm, is based on the reflectance and transmittance determination for the plane waves propagating through layered structures, which results in the solution of a system of linear equations for the transmittance and reflectance of each layer using the sweep method. Another considered method is based on the determination of transfer matrices (TM) for the FBG as a whole. Firstly, a homogeneous FBG was modeled using both methods, and the resulting reflectance spectra were compared to the one obtained via a specialized commercial software package. Secondly, modeling results of a π-phase-shifted FBG were presented and discussed. For both FBG models, the influence of the partition interval of the LS method on the simulated spectrum was studied. Based on the analysis of the simulation data, additional required modeling conditions for phase-shifted FBGs were established, which enhanced the modeling performance of the LS method. Full article
(This article belongs to the Special Issue Algorithms and Calculations in Fiber Optics and Photonics)
43 pages, 506 KiB  
Review
Assembly and Production Line Designing, Balancing and Scheduling with Inaccurate Data: A Survey and Perspectives
by Yuri N. Sotskov
Algorithms 2023, 16(2), 100; https://doi.org/10.3390/a16020100 - 10 Feb 2023
Cited by 4 | Viewed by 3073
Abstract
Assembly lines (conveyors) are traditional means of large-scale and mass production. Assembly line balancing is needed to optimize the assembly process by configuring and designing an assembly line for the same or similar types of final products. This problem consists of designing the assembly line and distributing the total workload for manufacturing each unit of the fixed product among the ordered workstations along the constructed assembly line. Assembly line balancing research has focused mainly on simple assembly line balancing problems, which are restricted by a set of conditions that make the considered assembly line ideal for research. Much research has been published that describes and solves (usually heuristically) more realistic generalized assembly line balancing problems. Assembly line designing, balancing and scheduling problems with non-deterministic (stochastic, fuzzy or uncertain) parameters have been investigated in many published works. This paper is about design and optimization methods for assembly and disassembly lines. We survey the recent developments in designing, balancing and scheduling assembly (disassembly) lines. New formulations of simple assembly line balancing problems are presented in order to take into account the modifications and uncertainties that characterize real assembly production. Full article
(This article belongs to the Special Issue Scheduling: Algorithms and Applications)
28 pages, 953 KiB  
Article
Enhancing Logistic Regression Using Neural Networks for Classification in Actuarial Learning
by George Tzougas and Konstantin Kutzkov
Algorithms 2023, 16(2), 99; https://doi.org/10.3390/a16020099 - 09 Feb 2023
Cited by 4 | Viewed by 2898
Abstract
We developed a methodology for the neural network boosting of logistic regression aimed at learning an additional model structure from the data. In particular, we constructed two classes of neural network-based models: shallow–dense neural networks with one hidden layer and deep neural networks with multiple hidden layers. Furthermore, several advanced approaches were explored, including the combined actuarial neural network approach, embeddings and transfer learning. The model training was achieved by minimizing either the deviance or the cross-entropy loss functions, leading to fourteen neural network-based models in total. For illustrative purposes, logistic regression and the alternative neural network-based models we propose are employed for a binary classification exercise concerning the occurrence of at least one claim in a French motor third-party insurance portfolio. Finally, the model interpretability issue was addressed via the local interpretable model-agnostic explanations approach. Full article
(This article belongs to the Special Issue Deep Neural Networks and Optimization Algorithms)
17 pages, 1788 KiB  
Article
A Novel Intelligent Method for Fault Diagnosis of Steam Turbines Based on T-SNE and XGBoost
by Zhiguo Liang, Lijun Zhang and Xizhe Wang
Algorithms 2023, 16(2), 98; https://doi.org/10.3390/a16020098 - 09 Feb 2023
Cited by 5 | Viewed by 1687
Abstract
Since failures of steam turbines occur frequently and can cause huge losses for thermal plants, it is important to identify faults in advance. A novel clustering fault diagnosis method for steam turbines based on t-distributed stochastic neighbor embedding (t-SNE) and extreme gradient boosting (XGBoost) is proposed in this paper. First, the t-SNE algorithm was used to map the high-dimensional data to a low-dimensional space, and K-means clustering was performed in the low-dimensional space to distinguish the fault data from the normal data. Then, the class imbalance in the data was addressed with the synthetic minority over-sampling technique (SMOTE) to obtain a steam turbine characteristic data set with fault labels. Finally, the XGBoost algorithm was used to solve this multi-classification problem. The data set used in this paper was derived from the time series data of a steam turbine of a thermal power plant. In the processing analysis, the method achieved the best performance, with an overall accuracy of 97% and an early warning at least two hours in advance. The experimental results show that this method can effectively evaluate the condition of, and provide fault warnings for, power plant equipment. Full article
(This article belongs to the Special Issue Artificial Intelligence for Fault Detection and Diagnosis)
14 pages, 2372 KiB  
Article
Nemesis: Neural Mean Teacher Learning-Based Emotion-Centric Speaker
by Aryan Yousefi and Kalpdrum Passi
Algorithms 2023, 16(2), 97; https://doi.org/10.3390/a16020097 - 09 Feb 2023
Viewed by 1305
Abstract
Image captioning is the multi-modal task of automatically describing a digital image based on its contents and their semantic relationship. This research area has gained increasing popularity over the past few years; however, most of the previous studies have been focused on purely objective content-based descriptions of the image scenes. In this study, efforts have been made to generate more engaging captions by leveraging human-like emotional responses. To achieve this task, a mean teacher learning-based method has been applied to the recently introduced ArtEmis dataset. ArtEmis is the first large-scale dataset for emotion-centric image captioning, containing 455K emotional descriptions of 80K artworks from WikiArt. This method includes a self-distillation relationship between memory-augmented language models with meshed connectivity. These language models are trained in a cross-entropy phase and then fine-tuned in a self-critical sequence training phase. According to various popular natural language processing metrics, such as BLEU, METEOR, ROUGE-L, and CIDEr, our proposed model has obtained a new state of the art on ArtEmis. Full article
(This article belongs to the Special Issue Machine Learning Algorithms for Big Data Analysis)
14 pages, 890 KiB  
Article
Image Quality Assessment for Gibbs Ringing Reduction
by Yue Wang and John J. Healy
Algorithms 2023, 16(2), 96; https://doi.org/10.3390/a16020096 - 09 Feb 2023
Cited by 3 | Viewed by 1404
Abstract
Gibbs ringing is an artefact that is inevitable in any imaging modality where the measurement is Fourier band-limited. It impacts the quality of the image by creating a ringing appearance around discontinuities. Many novel ways of suppressing the artefact have been proposed, including machine learning methods, but quantitative comparisons of the results have frequently lacked rigour. In this paper, we examine image quality assessment metrics on three test images of differing complexity. We determine six metrics which show promise for simultaneously assessing the severity of Gibbs ringing and of other errors such as blurring. We also examined applying the metrics to a region of interest around discontinuities in the image, and demonstrate that this region-of-interest approach does not improve the performance of the metrics. Finally, we examine the effect of the error threshold parameter in two metrics. Our results will aid the development of best practice in the comparison of algorithms for the suppression of Gibbs ringing. Full article
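The artefact described above is easy to reproduce in one dimension: band-limit the Fourier spectrum of a step and an overshoot appears at the discontinuity. A small NumPy sketch (the signal length and cutoff are illustrative assumptions):

```python
import numpy as np

n = 256
signal = np.zeros(n)
signal[n // 4 : 3 * n // 4] = 1.0   # rectangle with two discontinuities

# Band-limit: keep only Fourier coefficients with |k| <= kmax
spectrum = np.fft.fft(signal)
kmax = 16
freqs = np.fft.fftfreq(n, d=1.0 / n)  # integer frequency indices
spectrum[np.abs(freqs) > kmax] = 0.0
reconstructed = np.fft.ifft(spectrum).real

# The band-limited reconstruction overshoots the unit step near the edges
# (the classical Gibbs overshoot is roughly 9% of the jump height)
print(signal.max(), reconstructed.max())
```

Raising `kmax` narrows the ringing but does not remove the overshoot, which is why suppression algorithms (and fair metrics for comparing them) are needed.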
30 pages, 3724 KiB  
Review
Defect Detection Methods for Industrial Products Using Deep Learning Techniques: A Review
by Alireza Saberironaghi, Jing Ren and Moustafa El-Gindy
Algorithms 2023, 16(2), 95; https://doi.org/10.3390/a16020095 - 08 Feb 2023
Cited by 23 | Viewed by 13916
Abstract
Over the last few decades, detecting surface defects has attracted significant attention as a challenging task. There are specific classes of problems that can be solved using traditional image processing techniques. However, these techniques struggle with complex textures in backgrounds, noise, and differences in lighting conditions. As a solution to this problem, deep learning has recently emerged, motivated by two main factors: accessibility to computing power and the rapid digitization of society, which enables the creation of large databases of labeled samples. This review paper aims to briefly summarize and analyze the current state of research on detecting defects using machine learning methods. First, deep learning-based detection of surface defects on industrial products is discussed from three perspectives: supervised, semi-supervised, and unsupervised. Secondly, the current research status of deep learning defect detection methods for X-ray images is discussed. Finally, we summarize the most common challenges and their potential solutions in surface defect detection, such as unbalanced sample identification, limited sample size, and real-time processing. Full article
(This article belongs to the Special Issue Deep Learning Architecture and Applications)