Algorithms, Volume 16, Issue 10 (October 2023) – 42 articles

Cover Story: A substantial amount of satellite imaging data is produced daily in remote sensing (RS), and improved methodologies and applications are required to mass-label images for downstream machine learning. Curating and labelling such datasets is a time-consuming task for RS specialists. The proposed approach utilises autoencoders for learnt feature representation and subsequent manifold projection algorithms for two-dimensional exploration. Users interact with the visualization and label clusters based on their domain knowledge. Re-applying manifold projection to subsets of clusters interactively refines them and achieves better class separation. The approach is evaluated on real-world remote sensing satellite image datasets and demonstrates its effectiveness in achieving efficient image tile labelling.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • Papers are published in both HTML and PDF formats; PDF is the official version. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
19 pages, 1864 KiB  
Article
COVID-19 Detection from Chest X-ray Images Based on Deep Learning Techniques
by Shubham Mathesul, Debabrata Swain, Santosh Kumar Satapathy, Ayush Rambhad, Biswaranjan Acharya, Vassilis C. Gerogiannis and Andreas Kanavos
Algorithms 2023, 16(10), 494; https://doi.org/10.3390/a16100494 - 23 Oct 2023
Cited by 1 | Viewed by 2012
Abstract
The COVID-19 pandemic has posed significant challenges in accurately diagnosing the disease, as severe cases may present symptoms similar to pneumonia. Real-Time Reverse Transcriptase Polymerase Chain Reaction (RT-PCR) is the conventional diagnostic technique; however, it has limitations in terms of time-consuming laboratory procedures and kit availability. Radiological chest images, such as X-rays and Computed Tomography (CT) scans, have been essential in aiding the diagnosis process. In this research paper, we propose a deep learning (DL) approach based on Convolutional Neural Networks (CNNs) to enhance the detection of COVID-19 and its variants from chest X-ray images. Building upon the existing research in SARS and COVID-19 identification using AI and machine learning techniques, our DL model aims to extract the most significant features from the X-ray scans of affected individuals. By employing an explanatory CNN-based technique, we achieved a promising accuracy of up to 97% in detecting COVID-19 cases, which can assist physicians in effectively screening and identifying probable COVID-19 patients. This study highlights the potential of DL in medical imaging, specifically in detecting COVID-19 from radiological images. The improved accuracy of our model demonstrates its efficacy in aiding healthcare professionals and mitigating the spread of the disease. Full article
(This article belongs to the Special Issue Artificial Intelligence for Medical Imaging)
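To make the kind of pipeline described above concrete, here is a minimal sketch of a CNN chest X-ray classifier; the layer sizes, input resolution, and class count are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal sketch of a CNN for chest X-ray classification (COVID vs. normal).
# Architecture, input size, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class ChestXrayCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 28 * 28, 256), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(256, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

model = ChestXrayCNN()
logits = model(torch.randn(4, 1, 224, 224))  # batch of 4 grayscale 224x224 X-rays
```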
15 pages, 3253 KiB  
Article
Remote Sensing of Snow Parameters: A Sensitivity Study of Retrieval Performance Based on Hyperspectral versus Multispectral Data
by Elliot Pachniak, Wei Li, Tomonori Tanikawa, Charles Gatebe and Knut Stamnes
Algorithms 2023, 16(10), 493; https://doi.org/10.3390/a16100493 - 23 Oct 2023
Cited by 1 | Viewed by 1167
Abstract
Snow parameters have traditionally been retrieved using discontinuous, multi-band sensors; however, continuous hyperspectral sensors are now being developed as an alternative. In this paper, we investigate the performance of various sensor configurations using machine learning neural networks trained on a simulated dataset. Our results show improvements in the accuracy of retrievals of snow grain size and impurity concentration for continuous hyperspectral channel configurations. Retrieval accuracy of snow albedo was found to be similar for all channel configurations. Full article
15 pages, 920 KiB  
Article
Shelved–Retrieved Method for Weakly Balanced Constrained Clustering Problems
by Xinxiang Hou, Andong Qiu, Lu Yang and Zhouwang Yang
Algorithms 2023, 16(10), 492; https://doi.org/10.3390/a16100492 - 23 Oct 2023
Viewed by 1516
Abstract
Clustering problems are prevalent in areas such as transport and partitioning. Owing to the demand for centralized storage and limited resources, a complex variant of this problem has emerged, also referred to as the weakly balanced constrained clustering (WBCC) problem. Clusters must satisfy constraints regarding cluster weights and connectivity. However, existing methods fail to guarantee cluster connectivity in diverse scenarios, thereby resulting in additional transportation costs. In response to the aforementioned limitations, this study introduces a shelved–retrieved method. This method embeds adjacent relationships during power diagram construction to ensure cluster connectivity. Using the shelved–retrieved method, connected clusters are generated and iteratively adjusted to determine the optimal solutions. Further, experiments are conducted on three synthetic datasets, each with three objective functions, and the results are compared to those obtained using other techniques. Our method successfully generates clusters that satisfy the constraints imposed by the WBCC problem and consistently outperforms other techniques in terms of the evaluation measures. Full article
63 pages, 3409 KiB  
Review
Survey of Recent Applications of the Chaotic Lozi Map
by René Lozi
Algorithms 2023, 16(10), 491; https://doi.org/10.3390/a16100491 - 22 Oct 2023
Cited by 3 | Viewed by 2826
Abstract
Since its original publication in 1978, Lozi’s chaotic map has been thoroughly explored and continues to be. Hundreds of publications have analyzed its particular structure and applied its properties in many fields (e.g., improvement of physical devices, electrical components such as memristors, cryptography, optimization, evolutionary algorithms, synchronization, control, secure communications, AI with swarm intelligence, chimeras, solitary states, etc.) through algorithms such as the COLM algorithm (Chaotic Optimization algorithm based on Lozi Map), Particle Swarm Optimization (PSO), and Differential Evolution (DE). In this article, we present a survey based on dozens of articles on the use of this map in algorithms aimed at real applications or applications exploring new directions of dynamical systems such as chimeras and solitary states. Full article
(This article belongs to the Special Issue Surveys in Algorithm Analysis and Complexity Theory, Part II)
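For readers unfamiliar with the map itself, the Lozi map is a piecewise-linear analogue of the Hénon map and is straightforward to iterate; a minimal sketch with the classical parameter values a = 1.7, b = 0.5:

```python
# Iterate the Lozi map: x_{n+1} = 1 - a*|x_n| + b*y_n, y_{n+1} = x_n.
# a = 1.7, b = 0.5 are the classical chaotic parameter values.
import numpy as np

def lozi_orbit(x0: float, y0: float, n: int, a: float = 1.7, b: float = 0.5) -> np.ndarray:
    pts = np.empty((n, 2))
    x, y = x0, y0
    for i in range(n):
        x, y = 1.0 - a * abs(x) + b * y, x  # simultaneous update
        pts[i] = (x, y)
    return pts

orbit = lozi_orbit(0.1, 0.1, 10_000)  # points sample the Lozi strange attractor
```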
26 pages, 679 KiB  
Article
Deep Neural Networks Training by Stochastic Quasi-Newton Trust-Region Methods
by Mahsa Yousefi and Ángeles Martínez
Algorithms 2023, 16(10), 490; https://doi.org/10.3390/a16100490 - 20 Oct 2023
Viewed by 1228
Abstract
While first-order methods are popular for solving optimization problems arising in deep learning, they come with some acute deficiencies. To overcome these shortcomings, there has been recent interest in introducing second-order information through quasi-Newton methods that are able to construct Hessian approximations using only gradient information. In this work, we study the performance of stochastic quasi-Newton algorithms for training deep neural networks. We consider two well-known quasi-Newton updates, the limited-memory Broyden–Fletcher–Goldfarb–Shanno (BFGS) and the symmetric rank one (SR1). This study fills a gap concerning the real performance of both updates in the minibatch setting and analyzes whether more efficient training can be obtained when using the more robust BFGS update or the cheaper SR1 formula, which, by allowing for indefinite Hessian approximations, can potentially help to better navigate the pathological saddle points present in the non-convex loss functions found in deep learning. We present and discuss the results of an extensive experimental study that includes many aspects affecting performance, such as batch normalization, the network architecture, the limited-memory parameter, and the batch size. Our results show that stochastic quasi-Newton algorithms are efficient and, in some instances, able to outperform the well-known first-order Adam optimizer, run with the optimal combination of its numerous hyperparameters, and the stochastic second-order trust-region STORM algorithm. Full article
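As a rough illustration of quasi-Newton training in the minibatch setting, the sketch below uses PyTorch's built-in line-search L-BFGS as a stand-in; the paper's stochastic trust-region L-BFGS and L-SR1 variants are not part of core PyTorch, and the model and data here are placeholders.

```python
# Sketch: quasi-Newton (L-BFGS) training on minibatches with PyTorch.
# torch.optim.LBFGS is a line-search variant, standing in for the paper's
# stochastic trust-region methods.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.LBFGS(model.parameters(), history_size=10, max_iter=4)

def train_step(xb: torch.Tensor, yb: torch.Tensor) -> float:
    def closure():
        opt.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()  # L-BFGS builds curvature pairs from these gradients
        return loss
    return opt.step(closure).item()

xb, yb = torch.randn(64, 784), torch.randint(0, 10, (64,))
print(train_step(xb, yb))
```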
21 pages, 4480 KiB  
Article
Neural Network-Enhanced Fault Diagnosis of Robot Joints
by Yifan Zhang and Quanmin Zhu
Algorithms 2023, 16(10), 489; https://doi.org/10.3390/a16100489 - 20 Oct 2023
Cited by 1 | Viewed by 1273
Abstract
Industrial robots play an indispensable role in flexible production lines, and faults caused by degradation of equipment, motors, and mechanical system joints, and even by task diversity, affect the efficiency of production lines and product quality. Aiming to achieve high-precision fault diagnosis of robotic arms across multiple fault sizes, this study presents a back propagation (BP) multiclassification neural network-based method for robotic arm fault diagnosis, using feature fusion of the position, attitude, and acceleration of the UR10 robotic arm end-effector to establish the database for neural network training. The new algorithm achieves an accuracy above 95% for fault diagnosis of each joint and a diagnostic resolution of up to 0.1 degrees. Notably, the fault diagnosis algorithm can detect faults effectively and in time while avoiding complex mathematical operations. Full article
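A hedged sketch of the general recipe (a backpropagation-trained multiclass network over fused end-effector features); the feature layout, class count, and synthetic data are assumptions:

```python
# Sketch: BP (backpropagation) multiclass neural network over fused end-effector
# features (position + attitude + acceleration). Data shapes are assumptions.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 9))        # 3 position + 3 attitude + 3 acceleration
y = rng.integers(0, 6, size=600)     # illustrative fault class per sample

clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0),
)
clf.fit(X, y)
print(clf.score(X, y))
```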
15 pages, 2594 KiB  
Article
Automatic Myocardium Segmentation in Delayed-Enhancement MRI with Pathology-Specific Data Augmentation and Deep Learning Architectures
by Gonzalo E. Mosquera-Rojas, Cylia Ouadah, Azadeh Hadadi, Alain Lalande and Sarah Leclerc
Algorithms 2023, 16(10), 488; https://doi.org/10.3390/a16100488 - 20 Oct 2023
Viewed by 1404
Abstract
The extent of myocardial infarction (MI) can be evaluated thanks to delayed enhancement (DE) cardiac MRI. DE MRI is an imaging technique acquired several minutes after the injection of a contrast agent, in which MI appears with a bright signal. Automatic myocardium segmentation in DE MRI is quite challenging, especially when MI is present, since these areas usually showcase a heterogeneous aspect in terms of shape and intensity, obscuring the visibility of the myocardium. To overcome this issue, we propose an image processing-based data augmentation algorithm in which diverse synthetic cases of MI are created in two different ways: fixed and adaptive. In the first, the training set is enlarged by a specific factor, whereas in the second, the method receives feedback from the segmentation model during training and performs the augmentation exclusively on complex cases. The method's performance was evaluated in single- and multi-modality settings. In the latter, information from kinetic images (cine MRI), which are acquired along with DE MRI in the same examination, is also used, and the features extracted from both modalities are fused. The results show that applying the data augmentation in a fixed fashion in a multi-modality setting leads to a more consistent segmentation of the myocardium in DE MRI. The segmentation models, which were all UNet-based architectures, can better relate MI areas with the myocardium, thus increasing overall robustness to pathology-specific local pattern perturbations. Full article
(This article belongs to the Special Issue Artificial Intelligence for Medical Imaging)
13 pages, 8724 KiB  
Article
Cloud Detection and Tracking Based on Object Detection with Convolutional Neural Networks
by Jose Antonio Carballo, Javier Bonilla, Jesús Fernández-Reche, Bijan Nouri, Antonio Avila-Marin, Yann Fabel and Diego-César Alarcón-Padilla
Algorithms 2023, 16(10), 487; https://doi.org/10.3390/a16100487 - 19 Oct 2023
Cited by 3 | Viewed by 1482
Abstract
Because solar renewable technologies need advance knowledge of the availability of solar resources, this paper presents a new methodology based on computer vision and object detection using convolutional neural networks (the EfficientDet-D2 model) to detect clouds in image series. The methodology also calculates the speed and direction of cloud motion, which allows the prediction of transients in the available solar radiation due to clouds. The retraining and validation of the convolutional neural network model finished successfully and gave accurate cloud detection results in the test. During the test, the estimation of the remaining time before a cloud-induced transient was also accurate, mainly due to the precise cloud detection and the accuracy of the remaining-time algorithm. Full article
(This article belongs to the Special Issue Recent Advances in Algorithms for Computer Vision Applications)
27 pages, 9031 KiB  
Article
Supervised Methods for Modeling Spatiotemporal Glacier Variations by Quantification of the Area and Terminus of Mountain Glaciers Using Remote Sensing
by Edmund Robbins, Thu Thu Hlaing, Jonathan Webb and Nezamoddin N. Kachouie
Algorithms 2023, 16(10), 486; https://doi.org/10.3390/a16100486 - 19 Oct 2023
Viewed by 1150
Abstract
Glaciers are important indicators of climate change, as changes in their physical features, such as surface area, occur in response to measurable fluctuations in climate factors such as temperature, precipitation, and CO2. Although a general retreat of mountain glacier systems has been identified in relation to centennial trends toward warmer temperatures, there is the potential to extract a great deal more information regarding regional variations in climate from mapping the time history of the terminus position or surface area of glaciers. The remote nature of glaciers renders direct measurement impractical on anything other than a local scale. Considering the sheer number of mountain glaciers around the globe, ground measurements of terminus position are available for only a small percentage of glaciers, and ground measurements of glacier area are rare. In this project, changes in the terminus position and area of the Franz Josef and Gorner glaciers were quantified in response to climate factors using satellite imagery taken by Landsat at regular intervals. Two supervised learning methods, a parametric method (multiple regression) and a nonparametric method (generalized additive model), were implemented to identify climate factors that impact glacier changes. Local temperature, CO2, and precipitation were identified as significant factors for predicting changes in both the Franz Josef and Gorner glaciers. Spatiotemporal quantification of glacier change is an essential task for modeling glacier variations in response to global and local climate factors. This work provides valuable insights into the quantification of glacier surface area using satellite imagery, with the potential for implementation as a generic approach. Full article
(This article belongs to the Special Issue Supervised and Unsupervised Classification Algorithms (2nd Edition))
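A minimal sketch of the parametric half of this setup (multiple regression of glacier area on climate factors); the variable names and synthetic values are assumptions:

```python
# Sketch: multiple regression of glacier surface area on climate factors.
# Column names and synthetic data are assumptions for illustration.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "temperature": rng.normal(1.0, 0.5, 40),    # local temperature anomaly
    "precipitation": rng.normal(2000, 300, 40),
    "co2": np.linspace(360, 420, 40),
})
df["area_km2"] = (35 - 2.0 * df["temperature"] - 0.03 * (df["co2"] - 360)
                  + 0.001 * (df["precipitation"] - 2000) + rng.normal(0, 0.5, 40))

reg = LinearRegression().fit(df[["temperature", "precipitation", "co2"]], df["area_km2"])
print(dict(zip(["temperature", "precipitation", "co2"], reg.coef_)))
```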
15 pages, 3369 KiB  
Article
Representing and Inferring Massive Network Traffic Condition: A Case Study in Nashville, Tennessee
by Hairuilong Zhang, Yangsong Gu and Lee D. Han
Algorithms 2023, 16(10), 485; https://doi.org/10.3390/a16100485 - 19 Oct 2023
Viewed by 1226
Abstract
Intelligent transportation systems (ITSs) usually require monitoring of massive road networks and gathering traffic data at a high spatial and temporal resolution. This leads to the accumulation of substantial data volumes, necessitating the development of more concise data representations. Approaches like principal component analysis (PCA), which operate within subspaces, can construct precise low-dimensional models. However, interpreting these models can be challenging, primarily because the principal components often encompass a multitude of links within the traffic network. To overcome this issue, this study presents a novel approach for representing and indexing network traffic conditions through weighted CUR matrix decomposition integrated with clustering analysis. The proposed approach selects a subset of detectors from the original network to represent and index traffic conditions through a matrix decomposition method, allowing for more efficient management and analysis. The proposed method is evaluated using traffic detector data from the city of Nashville, TN. The results demonstrate that the approach is effective in representing and indexing network traffic conditions, with high accuracy and efficiency. Overall, this study contributes to the field of network traffic monitoring by proposing a novel approach for representing massive traffic networks and exploring the effects of incorporating clustering into CUR decomposition. The proposed approach can help traffic analysts and practitioners more efficiently manage and analyze traffic conditions, ultimately leading to more effective transportation systems. Full article
(This article belongs to the Special Issue Machine Learning Algorithms for Big Data Analysis)
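The following is a minimal CUR sketch; it uses uniform sampling of detectors (columns) and time slots (rows) where the paper uses weighted, clustering-informed selection, and the matrix is synthetic:

```python
# Sketch: CUR decomposition of a (detectors x time) traffic matrix by selecting
# actual columns/rows, so the factors stay interpretable as real detectors.
# Uniform sampling stands in for the paper's weighted, clustering-informed choice.
import numpy as np

def cur(A: np.ndarray, k: int, rng: np.random.Generator):
    cols = rng.choice(A.shape[1], size=k, replace=False)
    rows = rng.choice(A.shape[0], size=k, replace=False)
    C, R = A[:, cols], A[rows, :]
    U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)  # minimizes ||A - C U R||_F
    return C, U, R

rng = np.random.default_rng(2)
A = rng.normal(size=(100, 20)) @ rng.normal(size=(20, 288))  # low-rank synthetic data
C, U, R = cur(A, k=30, rng=rng)
print(np.linalg.norm(A - C @ U @ R) / np.linalg.norm(A))  # relative reconstruction error
```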
14 pages, 1828 KiB  
Article
TransPCGC: Point Cloud Geometry Compression Based on Transformers
by Shiyu Lu, Huamin Yang and Cheng Han
Algorithms 2023, 16(10), 484; https://doi.org/10.3390/a16100484 - 19 Oct 2023
Viewed by 1641
Abstract
Due to the often substantial size of real-world point cloud data, efficient transmission and storage have become critical concerns. Point cloud compression plays a decisive role in addressing these challenges. Although capturing global information within point cloud data is important for effective compression, many existing point cloud compression methods overlook this crucial aspect. To tackle this oversight, we propose an innovative end-to-end point cloud compression method designed to extract both global and local information. Our method includes a novel Transformer module to extract rich features from the point cloud. Using a pooling operation that requires no learnable parameters as a token mixer for computing long-distance dependencies ensures global feature extraction while significantly reducing both computations and parameters. Furthermore, we employ convolutional layers for feature extraction. These layers not only preserve the spatial structure of the point cloud but also offer the advantage of parameter independence from the input point cloud size, resulting in a substantial reduction in parameters. Our experimental results demonstrate the effectiveness of the proposed TransPCGC network. It achieves average Bjontegaard Delta Rate (BD-Rate) gains of 85.79% and 80.24% compared to Geometry-based Point Cloud Compression (G-PCC). Additionally, in comparison to the Learned-PCGC network, our approach attains average BD-Rate gains of 18.26% and 13.83%. Moreover, it is accompanied by a 16% reduction in encoding and decoding time, along with a 50% reduction in model size. Full article
(This article belongs to the Special Issue Digital Signal Processing Algorithms and Applications)
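The parameter-free token mixer idea can be sketched briefly; the block below is a PoolFormer-style stand-in and does not reproduce TransPCGC's actual module:

```python
# Sketch: a parameter-free pooling "token mixer", the kind of substitute for
# attention that the abstract describes. Pool size is an assumption.
import torch
import torch.nn as nn

class PoolTokenMixer(nn.Module):
    """Mixes spatial tokens via average pooling; has no learnable parameters."""
    def __init__(self, pool_size: int = 3):
        super().__init__()
        self.pool = nn.AvgPool2d(pool_size, stride=1, padding=pool_size // 2,
                                 count_include_pad=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.pool(x) - x  # subtraction keeps only the mixed residual

tokens = torch.randn(1, 64, 32, 32)  # (batch, channels, H, W) feature map
print(PoolTokenMixer()(tokens).shape)
```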
12 pages, 17141 KiB  
Article
Development of a Mammography Calcification Detection Algorithm Using Deep Learning with Resolution-Preserved Image Patch Division
by Miu Sakaida, Takaaki Yoshimura, Minghui Tang, Shota Ichikawa and Hiroyuki Sugimori
Algorithms 2023, 16(10), 483; https://doi.org/10.3390/a16100483 - 18 Oct 2023
Cited by 1 | Viewed by 1550
Abstract
Convolutional neural networks (CNNs) in deep learning have input pixel limitations, which lead to information about microcalcifications being lost when mammography images are compressed. Segmenting images into patches retains the original resolution when inputting them into the CNN and allows the location of calcification to be identified. This study aimed to develop a mammographic calcification detection method using deep learning by classifying the presence of calcification in the breast. Using publicly available data, 212 mammograms from 81 women were segmented into 224 × 224-pixel patches, producing 15,049 patches. These were visually classified for calcification and divided into five subsets for training and evaluation using fivefold cross-validation, ensuring image consistency. ResNet18, ResNet50, and ResNet101 were used for training, each of which created a two-class calcification classifier. The ResNet18 classifier achieved an overall accuracy of 96.0%, mammogram accuracy of 95.8%, an area under the curve (AUC) of 0.96, and a processing time of 0.07 s. The results of ResNet50 indicated 96.4% overall accuracy, 96.3% mammogram accuracy, an AUC of 0.96, and a processing time of 0.14 s. The results of ResNet101 indicated 96.3% overall accuracy, 96.1% mammogram accuracy, an AUC of 0.96, and a processing time of 0.20 s. This developed method offers quick, accurate calcification classification and efficient visualization of calcification locations. Full article
(This article belongs to the Special Issue Artificial Intelligence for Medical Imaging)
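A minimal sketch of the resolution-preserving patch division step; the image size is an assumption:

```python
# Sketch: resolution-preserving division of a mammogram into 224x224 patches,
# so fine microcalcifications are not lost to whole-image downsampling.
import numpy as np

def to_patches(img: np.ndarray, size: int = 224) -> np.ndarray:
    h, w = img.shape
    ph, pw = h // size, w // size          # drop any ragged border for simplicity
    img = img[: ph * size, : pw * size]
    return (img.reshape(ph, size, pw, size)
               .swapaxes(1, 2)
               .reshape(-1, size, size))

mammo = np.zeros((3328, 2560), dtype=np.uint16)  # illustrative image size
patches = to_patches(mammo)
print(patches.shape)  # (ph * pw, 224, 224), each patch at native resolution
```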
21 pages, 539 KiB  
Article
SmartBuild RecSys: A Recommendation System Based on the Smart Readiness Indicator for Energy Efficiency in Buildings
by Muhammad Talha Siddique, Paraskevas Koukaras, Dimosthenis Ioannidis and Christos Tjortjis
Algorithms 2023, 16(10), 482; https://doi.org/10.3390/a16100482 - 17 Oct 2023
Viewed by 1363
Abstract
The Smart Readiness Indicator (SRI) is a newly developed framework that measures a building’s technological readiness to improve its energy efficiency. The integration of data obtained from this framework with data derived from Building Information Modeling (BIM) has the potential to yield compelling results. This research proposes an algorithm for a Recommendation System (RS) that uses SRI and BIM data to advise on building energy-efficiency improvements. Following a modular programming approach, the proposed system is split into two algorithmic approaches linked with two distinct use cases. In the first use case, BIM data are utilized to provide thermal envelope enhancement recommendations. A hybrid Machine Learning (ML) (Random Forest–Decision Tree) algorithm is trained using an Industry Foundation Classes (IFC) BIM model of CERTH's nZEB Smart Home in Greece and data from the Passive House database. In the second use case, SRI data are utilized to develop an RS for Heating, Ventilation, and Air Conditioning (HVAC) system improvement, in which a filtering function and a KNN algorithm suggest automation levels for building service improvements. Considering the results from both use cases, this paper provides a solid framework that exploits more possibilities for coupling SRI with BIM data. It presents a novel algorithm that exploits these data to facilitate the development of an RS for increasing building energy efficiency. Full article
(This article belongs to the Special Issue Self-Learning and Self-Adapting Algorithms in Machine Learning)
18 pages, 7313 KiB  
Article
FenceTalk: Exploring False Negatives in Moving Object Detection
by Yun-Wei Lin, Yuh-Hwan Liu, Yi-Bing Lin and Jian-Chang Hong
Algorithms 2023, 16(10), 481; https://doi.org/10.3390/a16100481 - 17 Oct 2023
Viewed by 1368
Abstract
Deep learning models are often trained with a large amount of labeled data to improve the accuracy of moving object detection in new fields. However, the model may not be robust enough due to insufficient training data in the new field, resulting in some moving objects not being successfully detected. Training with data that the pre-trained deep learning model failed to detect can effectively improve accuracy in the new field, but it is costly to retrieve the image data containing moving objects from millions of images per day to train the model. Therefore, we propose FenceTalk, a moving object detection system that compares the difference between the current frame and the background image based on the structural similarity index measure (SSIM). FenceTalk automatically selects suspicious images with moving objects that are not successfully detected by the Yolo model, so that training data can be selected at a lower labor cost. FenceTalk can effectively define and update the background image in the field, reducing misjudgments caused by changes in light and shadow, and selects images containing moving objects using an optimal threshold. Our study demonstrates its performance and generality using real data from different fields. For example, compared with the pre-trained Yolo model using the MS COCO dataset, the overall recall of FenceTalk increased from 72.36% to 98.39% for the model trained with the data picked out by SSIM. The recall of FenceTalk, combined with Yolo and SSIM, can reach more than 99%. Full article
(This article belongs to the Special Issue Machine Learning for Pattern Recognition)
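A minimal sketch of the SSIM-based selection step; the threshold value is an assumption (FenceTalk tunes an optimal threshold per field):

```python
# Sketch: flag frames whose SSIM against the maintained background drops below
# a threshold, marking them as containing possible undetected moving objects.
import numpy as np
from skimage.metrics import structural_similarity

def is_suspicious(frame: np.ndarray, background: np.ndarray,
                  threshold: float = 0.85) -> bool:
    score = structural_similarity(frame, background, data_range=255)
    return score < threshold  # low similarity: possible moving object

bg = np.random.randint(0, 255, (480, 640), dtype=np.uint8)
frame = bg.copy()
frame[200:260, 300:360] = 255  # synthetic intruding object
print(is_suspicious(frame, bg))
```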
14 pages, 2367 KiB  
Article
The Iterative Exclusion of Compatible Samples Workflow for Multi-SNP Analysis in Complex Diseases
by Wei Xu, Xunhong Zhu, Liping Zhang and Jun Gao
Algorithms 2023, 16(10), 480; https://doi.org/10.3390/a16100480 - 16 Oct 2023
Viewed by 1242
Abstract
Complex diseases are affected by various factors, and single-nucleotide polymorphisms (SNPs) form the basis of their susceptibility by affecting protein structure and gene expression. Complex diseases often arise from the interactions of multiple SNPs, which are investigated using epistasis detection algorithms. Nevertheless, the computational burden associated with the “combination explosion” hinders these algorithms’ ability to detect such interactions. To perform multi-SNP analysis in complex diseases, the iterative exclusion of compatible samples (IECS) workflow is proposed in this work. In the IECS workflow, qualitative comparative analysis (QCA) is first employed as the calculation engine to compute the solution; second, the pattern is extracted from the prime implicants with the greatest raw coverage in the solution; then, the pattern is tested with the chi-square test on the source dataset; finally, all compatible samples are excluded from the current dataset. This process is repeated until the QCA calculation has no solution or the iteration threshold is reached. The workflow was applied to analyze simulated datasets and the Alzheimer’s disease dataset, and its performance was compared with that of the BOOST and MDR algorithms. The findings illustrate that IECS exhibits greater power with less computation and can be applied to multi-SNP analysis in complex diseases. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
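The chi-square validation step can be sketched directly; the contingency counts below are illustrative:

```python
# Sketch: chi-square test of an extracted SNP pattern against case/control status,
# as in the IECS step that validates a pattern on the source dataset.
import numpy as np
from scipy.stats import chi2_contingency

# rows: pattern present / absent; cols: cases / controls (illustrative counts)
table = np.array([[120,  45],
                  [ 80, 155]])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.3g}")  # small p: pattern associated with disease
```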
16 pages, 868 KiB  
Article
Problem-Driven Scenario Generation for Stochastic Programming Problems: A Survey
by Xiaochen Chou and Enza Messina
Algorithms 2023, 16(10), 479; https://doi.org/10.3390/a16100479 - 13 Oct 2023
Viewed by 1701
Abstract
Stochastic Programming is a powerful framework that addresses decision-making under uncertainties, which is a frequent occurrence in real-world problems. To effectively solve Stochastic Programming problems, scenario generation is one of the common practices that organizes realizations of stochastic processes with finite discrete distributions, which enables the use of mathematical programming models of the original problem. The quality of solutions is significantly influenced by the scenarios employed, necessitating a delicate balance between incorporating informative scenarios and preventing overfitting. Distribution-based scenario generation methodologies have been extensively studied over time, while the relatively recent concept of problem-driven scenario generation has emerged, aiming to incorporate the underlying problem’s structure during the scenario generation process. This survey explores recent literature on problem-driven scenario generation algorithms and methodologies. The investigation aims to identify circumstances under which this approach is effective and efficient. The work provides a comprehensive categorization of existing literature, supplemented by illustrative examples. Additionally, the survey examines potential applications and discusses avenues for its integration with machine learning technologies. By shedding light on the effectiveness of problem-driven scenario generation and its potential for synergistic integration with machine learning, this survey contributes to enhanced decision-making strategies in the context of uncertainties. Full article
15 pages, 1395 KiB  
Article
Evolutionary Approaches for Adversarial Attacks on Neural Source Code Classifiers
by Valeria Mercuri, Martina Saletta and Claudio Ferretti
Algorithms 2023, 16(10), 478; https://doi.org/10.3390/a16100478 - 12 Oct 2023
Viewed by 1291
Abstract
As the prevalence and sophistication of cyber threats continue to increase, the development of robust vulnerability detection techniques becomes paramount in ensuring the security of computer systems. Neural models have demonstrated significant potential in identifying vulnerabilities; however, they are not immune to adversarial attacks. This paper presents a set of evolutionary techniques for generating adversarial instances to enhance the resilience of neural models used for vulnerability detection. The proposed approaches leverage an evolution strategy (ES) algorithm that uses the output of the neural network to be deceived as its fitness function. Starting from existing instances, the algorithm evolves individuals, represented by source code snippets, by applying semantic-preserving transformations, while using the fitness to invert their original classification. This iterative process facilitates the generation of adversarial instances that can mislead the vulnerability detection models while maintaining the original behavior of the source code. The significance of this research lies in its contribution to the field of cybersecurity by addressing the need for enhanced resilience against adversarial attacks in vulnerability detection models. The evolutionary approach provides a systematic framework for generating adversarial instances, allowing for the identification and mitigation of weaknesses in AI classifiers. Full article
20 pages, 7880 KiB  
Article
eNightTrack: Restraint-Free Depth-Camera-Based Surveillance and Alarm System for Fall Prevention Using Deep Learning Tracking
by Ye-Jiao Mao, Andy Yiu-Chau Tam, Queenie Tsung-Kwan Shea, Yong-Ping Zheng and James Chung-Wai Cheung
Algorithms 2023, 16(10), 477; https://doi.org/10.3390/a16100477 - 12 Oct 2023
Cited by 1 | Viewed by 1791
Abstract
Falls are a major problem in hospitals, and physical or chemical restraints are commonly used to “protect” patients in hospitals and service users in hostels, especially elderly patients with dementia. However, physical and chemical restraints may be unethical, detrimental to mental health, and associated with negative side effects. Building upon our previous development of the wandering behavior monitoring system “eNightLog”, we aimed to develop a non-contact, restraint-free, multi-depth-camera system, “eNightTrack”, by incorporating a deep learning tracking algorithm to identify and notify about fall risks. Our system was evaluated on 20 scenarios, with a total of 307 video fragments, and consisted of four steps: data preparation, instance segmentation with a customized YOLOv8 model, head tracking with multi-object tracking (MOT) techniques, and alarm identification. Our system demonstrated a sensitivity of 96.8%, with 5 missed warnings out of 154 cases. The eNightTrack system was robust to interference from medical staff conducting clinical care in the region, as well as to different bed heights. Future research should take in more information to improve accuracy while ensuring lower computational costs to enable real-time applications. Full article
(This article belongs to the Special Issue Artificial Intelligence Algorithms for Healthcare)
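A hedged sketch of the detection-plus-tracking backbone using the off-the-shelf ultralytics API; the generic weights and video path are assumptions, not the customized model or depth pipeline from the paper:

```python
# Sketch: YOLOv8 segmentation plus built-in multi-object tracking, standing in
# for eNightTrack's customized head-tracking pipeline.
from ultralytics import YOLO

model = YOLO("yolov8n-seg.pt")                    # generic weights, not the paper's
results = model.track("ward_depth_video.mp4",     # hypothetical video file
                      persist=True, classes=[0])  # track persons only
for r in results:
    if r.boxes.id is not None:
        print(r.boxes.id.tolist())                # per-frame track IDs
```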
18 pages, 7443 KiB  
Article
IGLOO: An Iterative Global Exploration and Local Optimization Algorithm to Find Diverse Low-Energy Conformations of Flexible Molecules
by William Margerit, Antoine Charpentier, Cathy Maugis-Rabusseau, Johann Christian Schön, Nathalie Tarrat and Juan Cortés
Algorithms 2023, 16(10), 476; https://doi.org/10.3390/a16100476 - 12 Oct 2023
Cited by 1 | Viewed by 1402
Abstract
The exploration of the energy landscape of a chemical system is essential for understanding and predicting its observable properties. In most cases, this is a challenging task due to the high complexity of such landscapes, which often consist of multiple, possibly hierarchical basins that are difficult to locate and thoroughly explore. In this study, we introduce a novel method, called IGLOO (Iterative Global Exploration and Local Optimization), which aims to achieve a more efficient global exploration of the conformational space compared to existing techniques. The method utilizes a tree-based exploration inspired by the Rapidly exploring Random Tree (RRT) algorithm originating from robotics. IGLOO dynamically adjusts its exploration strategy to both homogeneously scan the landscape and focus on promising regions, avoiding redundant exploration. We evaluated IGLOO using models of two polypeptides and compared its performance to the traditional basin-hopping method and a hybrid method that also incorporates the RRT algorithm. We find that IGLOO outperforms both alternative methods in terms of efficiently and comprehensively exploring the molecular conformational space. This approach can be easily generalized to other chemical systems such as molecules on surfaces or crystalline systems. Full article
(This article belongs to the Collection Feature Paper in Metaheuristic Algorithms and Applications)
22 pages, 4936 KiB  
Article
A Distributed Autonomous Mission Planning Method for the Low-Orbit Imaging Constellation
by Qing Yang, Bingyu Song, Yingguo Chen, Lei He and Pei Wang
Algorithms 2023, 16(10), 475; https://doi.org/10.3390/a16100475 - 11 Oct 2023
Cited by 1 | Viewed by 1161
Abstract
With the improvement of satellite autonomy, multi-satellite cooperative mission planning has become an important application. This requires multiple satellites to interact with each other via inter-satellite links to reach a consistent mission planning scheme. Considering issues such as inter-satellite link failure, external interference, and communication delay, algorithms should minimize communication costs as much as possible. The CBBA algorithm is a fully distributed multi-agent task allocation algorithm that has been introduced into multi-satellite autonomous task planning scenarios and has achieved good planning results. This paper focuses on the communication problem and proposes an improved algorithm based on CBBA, called c-CBBA. The algorithm is designed with a task preemption strategy and a single-chain strategy to reduce the communication volume. The task preemption strategy is an accelerated convergence mechanism designed around the convergence characteristics of CBBA, while the single-chain strategy is a communication link pruning strategy designed around the information exchange characteristics of satellites. Experiments in various scenarios show that the algorithm can effectively reduce communication volume while ensuring the effectiveness of task planning. Full article
24 pages, 972 KiB  
Review
Emerging 6G/B6G Wireless Communication for the Power Infrastructure in Smart Cities: Innovations, Challenges, and Future Perspectives
by Ahmed Al Amin, Junho Hong, Van-Hai Bui and Wencong Su
Algorithms 2023, 16(10), 474; https://doi.org/10.3390/a16100474 - 09 Oct 2023
Cited by 3 | Viewed by 1872
Abstract
A well-functioning smart grid is an essential part of an efficient and uninterrupted power supply for the key enablers of smart cities. To effectively manage the operations of a smart grid, there is an essential requirement for a seamless wireless communication system that provides high data rates, reliability, flexibility, massive connectivity, low latency, security, and adaptability to changing needs. A contemporary review of the utilization of emerging 6G wireless communication for the major applications of smart grids, especially massive connectivity and monitoring, secured communication for operation and resource management, and time-critical operations, is presented in this paper. The article starts with the key enablers of the smart city and the necessity of the smart grid for them. The fundamentals of the smart city, the smart grid, and 6G wireless communication are also introduced. Moreover, the motivations for integrating 6G wireless communication with the smart grid system are expressed. An overview of the relevant literature, along with the novelty of this paper, is presented to bridge the gaps in current research. We describe the novel technologies of 6G wireless communication that can effectively serve the considered smart grid applications; these technologies have significantly improved key performance indicators compared to prior generations of wireless communication systems. A significant part of this article is the contemporary survey of the considered major smart grid applications served by 6G. In addition, anticipated challenges and interesting future research pathways are discussed explicitly. This article serves as a valuable resource for understanding the potential of 6G wireless communication in advancing smart grid applications and addressing emerging challenges. Full article
15 pages, 699 KiB  
Article
Multiprocessor Fair Scheduling Based on an Improved Slime Mold Algorithm
by Manli Dai and Zhongyi Jiang
Algorithms 2023, 16(10), 473; https://doi.org/10.3390/a16100473 - 07 Oct 2023
Cited by 1 | Viewed by 1561
Abstract
An improved slime mold algorithm (IMSMA) is presented in this paper for a multiprocessor multitask fair scheduling problem, with the aim of reducing the average processing time. An initial population strategy based on Bernoulli-mapping reverse learning is proposed for the slime mold algorithm. A Cauchy mutation strategy is employed to escape local optima, and the boundary-check mechanism of the slime mold swarm is optimized. The boundary conditions of the slime mold population are transformed into nonlinear, dynamically changing boundaries. This adjustment strengthens the algorithm’s global search capability in early iterations and its local search capability in later iterations, which accelerates convergence. Two unimodal and two multimodal test functions from the CEC2019 benchmark are chosen for comparative experiments. The experimental results show the algorithm’s robust convergence and its capacity to escape local optima. The improved slime mold algorithm is applied to the multiprocessor fair scheduling problem to reduce the average execution time on each processor. Numerical experiments show that the IMSMA performs better than other algorithms in terms of precision and convergence effectiveness. Full article
(This article belongs to the Special Issue Scheduling Theory and Algorithms for Sustainable Manufacturing)
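Two of the listed ingredients are easy to sketch in isolation; the Bernoulli-map parameter and mutation scale below are assumptions:

```python
# Sketch of two IMSMA ingredients: Bernoulli-map initialization with reverse
# (opposition-based) learning, and a Cauchy mutation step.
import numpy as np

def bernoulli_init(n: int, dim: int, lo: float, hi: float, lam: float = 0.4) -> np.ndarray:
    z = np.random.rand(dim)
    pop = np.empty((n, dim))
    for i in range(n):
        # Bernoulli (shift) map: z/(1-lam) on [0, 1-lam], (z-1+lam)/lam otherwise
        z = np.where(z <= 1 - lam, z / (1 - lam), (z - 1 + lam) / lam)
        pop[i] = lo + z * (hi - lo)
    opposite = lo + hi - pop           # reverse-learning (opposition) candidates
    return np.vstack([pop, opposite])  # in practice, keep the best n of these

def cauchy_mutate(x: np.ndarray, scale: float = 0.1) -> np.ndarray:
    return x + scale * np.random.standard_cauchy(x.shape)  # heavy tails escape optima

pop = bernoulli_init(30, 10, -5.0, 5.0)
print(cauchy_mutate(pop[0]))
```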
16 pages, 4506 KiB  
Article
Blockchain PoS and PoW Consensus Algorithms for Airspace Management Application to the UAS-S4 Ehécatl
by Seyed Mohammad Hashemi, Ruxandra Mihaela Botez and Georges Ghazi
Algorithms 2023, 16(10), 472; https://doi.org/10.3390/a16100472 - 07 Oct 2023
Viewed by 1335
Abstract
This paper introduces an innovative consensus algorithm for Unmanned Aircraft System Traffic Management (UTM) that allocates airspace through blockchain technology, a highly secure consensus protocol. A smart contract for allocating airspace was developed on the Ethereum blockchain. This technique enables the division of the swarm flight zone into smaller sectors to decrease the computational complexity of the algorithm. A decentralized voting system was established within these segmented flight zones, utilizing two primary methodologies: Proof of Work (PoW) and Proof of Stake (PoS). A swarm flight zone was generated by employing 1000 UAS-S4s across various locations and heading angles. The efficiency of the devised decentralized consensus system was assessed based on error rate and validation time. Although PoS displayed greater efficiency in the cumulative probability of block execution, the comparative analysis indicated that PoW outperformed PoS concerning the potential for conflicts among UASs. Full article
(This article belongs to the Section Analysis of Algorithms and Complexity Theory)
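The PoW side of the comparison reduces to a nonce search; a minimal sketch (Ethereum and UTM specifics are not modeled):

```python
# Sketch: the core proof-of-work loop, i.e., searching for a nonce whose block
# hash clears a difficulty target.
import hashlib

def mine(block_data: str, difficulty: int = 4):
    prefix = "0" * difficulty           # target: hash starts with N zero hex digits
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

nonce, digest = mine("sector-7 airspace grant: UAS-S4 #42")  # illustrative payload
print(nonce, digest)
```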
18 pages, 335 KiB  
Article
Computation of the Hausdorff Distance between Two Compact Convex Sets
by Kenneth Lange
Algorithms 2023, 16(10), 471; https://doi.org/10.3390/a16100471 - 06 Oct 2023
Cited by 1 | Viewed by 1584
Abstract
The Hausdorff distance between two closed sets has important theoretical and practical applications. Yet apart from finite point clouds, there appear to be no generic algorithms for computing this quantity. Because many infinite sets are defined by algebraic equalities and inequalities, this is a huge gap. The current paper constructs Frank–Wolfe and projected gradient ascent algorithms for computing the Hausdorff distance between two compact convex sets. Although these algorithms are guaranteed to go uphill, they can become trapped by local maxima. To avoid this defect, we investigate a homotopy method that gradually deforms two balls into the two target sets. The Frank–Wolfe and projected gradient algorithms are tested on two pairs (A,B) of compact convex sets, where: (1) A is the box [−1,1] translated by 1 and B is the intersection of the unit ball and the non-negative orthant; and (2) A is the probability simplex and B is the ℓ1 unit ball translated by 1. For problem (2), we find the Hausdorff distance analytically. Projected gradient ascent is more reliable than the Frank–Wolfe algorithm and finds the exact solution of problem (2). Homotopy improves the performance of both algorithms when the exact solution is unknown or unattained. Full article
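For reference, the quantity these algorithms compute is the standard Hausdorff distance between compact sets A and B in a metric space (X, d):

```latex
% Hausdorff distance between compact sets A and B in a metric space (X, d)
d_H(A, B) = \max\Bigl\{\, \sup_{a \in A} \inf_{b \in B} d(a, b),\;
                          \sup_{b \in B} \inf_{a \in A} d(a, b) \Bigr\}
```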
14 pages, 303 KiB  
Article
On Enhancement of Text Classification and Analysis of Text Emotions Using Graph Machine Learning and Ensemble Learning Methods on Non-English Datasets
by Fatemeh Gholami, Zahed Rahmati, Alireza Mofidi and Mostafa Abbaszadeh
Algorithms 2023, 16(10), 470; https://doi.org/10.3390/a16100470 - 04 Oct 2023
Viewed by 1688
Abstract
In recent years, machine learning approaches, in particular graph learning methods, have achieved great results in the field of natural language processing, in particular in text classification tasks. However, many such models have shown limited generalization on datasets in different languages. In this research, we investigate and elaborate on graph machine learning methods for non-English datasets (such as the Persian Digikala dataset), which consist of users’ opinions, for the task of text classification. More specifically, we investigate different combinations of (Pars)BERT with various graph neural network (GNN) architectures (such as GCN, GAT, and GIN), as well as ensemble learning methods, to tackle the text classification task on certain well-known non-English datasets. Our analysis and results demonstrate how applying GNN models helps achieve good scores on the task of text classification by better capturing the topological information between textual data. Additionally, our experiments show how models employing language-specific pre-trained models (like ParsBERT instead of BERT) capture better information about the data, resulting in better accuracies. Full article
17 pages, 11933 KiB  
Article
Manifold Explorer: Satellite Image Labelling and Clustering Tool Using Deep Convolutional Autoencoders
by Tulsi Patel, Mark W. Jones and Thomas Redfern
Algorithms 2023, 16(10), 469; https://doi.org/10.3390/a16100469 - 04 Oct 2023
Viewed by 1359
Abstract
We present a novel approach to providing greater insight into the characteristics of an unlabelled dataset, increasing the efficiency with which labelled datasets can be created. We leverage dimension-reduction techniques in combination with autoencoders to create an efficient feature representation for image tiles derived from remote sensing satellite imagery. The proposed methodology consists of two main stages. Firstly, an autoencoder network is utilised to reduce the high-dimensional image tile data into a compact and expressive latent feature representation. Subsequently, features are further reduced to a two-dimensional embedding space using the manifold learning algorithms Uniform Manifold Approximation and Projection (UMAP) and t-distributed Stochastic Neighbour Embedding (t-SNE). This step enables the visualization of the image tile clusters in a 2D plot, providing an intuitive and interactive representation that can be used to aid rapid and geographically distributed image labelling. To facilitate the labelling process, our approach allows users to interact with the 2D visualization and label clusters based on their domain knowledge. In cases where certain classes are not effectively separated, users can re-apply dimension reduction to interactively refine subsets of clusters and achieve better class separation, enabling a comprehensively labelled dataset. We evaluate the proposed approach on real-world remote sensing satellite image datasets and demonstrate its effectiveness in achieving accurate and efficient image tile clustering and labelling. Users actively participate in the labelling process through our interactive approach, which enhances the relevance of the labelled data by allowing domain experts to contribute their expertise and enrich the dataset for improved downstream analysis and applications. Full article
(This article belongs to the Special Issue Machine Learning Algorithms for Sensor Data and Image Understanding)
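A minimal sketch of the second stage (UMAP on autoencoder latents); the latent matrix here is synthetic, standing in for the autoencoder bottleneck features:

```python
# Sketch: project autoencoder latent features to 2D with UMAP for interactive
# labelling. Latent dimensionality and UMAP parameters are assumptions.
import numpy as np
import umap

latents = np.random.rand(5000, 128).astype(np.float32)  # (tiles, latent_dim)
embedding = umap.UMAP(n_components=2, n_neighbors=15,
                      min_dist=0.1).fit_transform(latents)
print(embedding.shape)  # (5000, 2): one point per tile, ready to plot and label
```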
21 pages, 3387 KiB  
Article
A Decision-Making Model to Determine Dynamic Facility Locations for a Disaster Logistic Planning Problem Using Deep Learning
by Lili Tanti, Syahril Efendi, Maya Silvi Lydia and Herman Mawengkang
Algorithms 2023, 16(10), 468; https://doi.org/10.3390/a16100468 - 04 Oct 2023
Viewed by 1320
Abstract
Disaster logistics management is vital in planning and organizing humanitarian assistance distribution. The planning problem faces challenges such as coordinating the allocation and distribution of essential resources while considering the severity of the disaster, population density, and accessibility. This study proposes an optimized disaster relief management model, including distribution center placement, demand point prediction, prohibited route mapping, and efficient relief goods distribution. A dynamic model predicts the locations of post-disaster distribution centers using the K-Means method based on the positions of impacted demand points. Artificial Neural Networks (ANNs) aid in predicting assistance requests around the formed distribution centers. The forbidden route model maps permitted and prohibited routes while considering constraints, enhancing the efficacy of relief supply distribution. The objective function aims to minimize both cost and time in post-disaster aid distribution. The deep location routing problem (DLRP) model effectively handles the mixed nonlinear multi-objective programming, choosing the best permitted routes while respecting forbidden ones. The combination of these models provides a comprehensive framework for optimizing disaster relief management, resulting in more effective and responsive disaster handling. Numerical examples show the model’s effectiveness in solving complex humanitarian logistics problems with lower computation time, which is crucial for quick decision making during disasters. Full article
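A minimal sketch of the distribution-center placement step; the demand-point coordinates and cluster count are assumptions:

```python
# Sketch: siting post-disaster distribution centers at K-Means centroids of
# impacted demand-point coordinates.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
demand_points = rng.uniform([98.6, 3.5], [98.8, 3.7], size=(200, 2))  # lon, lat
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(demand_points)
print(km.cluster_centers_)  # candidate distribution-center locations
```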
28 pages, 359 KiB  
Review
A Survey of Sequential Pattern Based E-Commerce Recommendation Systems
by Christie I. Ezeife and Hemni Karlapalepu
Algorithms 2023, 16(10), 467; https://doi.org/10.3390/a16100467 - 03 Oct 2023
Viewed by 1732
Abstract
E-commerce recommendation systems usually deal with massive customer sequential databases, such as historical purchase or click stream sequences. Recommendation systems’ accuracy can be improved if complex sequential patterns of user purchase behavior are learned by integrating sequential patterns of customer clicks and/or purchases into the user–item rating matrix input of collaborative filtering. This review focuses on algorithms of existing E-commerce recommendation systems that are sequential pattern-based. It provides a comprehensive and comparative performance analysis of these systems, exposing their methodologies, achievements, limitations, and potential for solving more important problems in this domain. The review shows that integrating sequential pattern mining of historical purchase and/or click sequences into a user–item matrix for collaborative filtering can (i) improve recommendation accuracy, (ii) reduce user–item rating data sparsity, (iii) increase the novelty rate of recommendations, and (iv) improve the scalability of recommendation systems. Full article
(This article belongs to the Special Issue New Trends in Algorithms for Intelligent Recommendation Systems)
24 pages, 7769 KiB  
Article
Anomaly Detection for Skin Lesion Images Using Convolutional Neural Network and Injection of Handcrafted Features: A Method That Bypasses the Preprocessing of Dermoscopic Images
by Flavia Grignaffini, Maurizio Troiano, Francesco Barbuto, Patrizio Simeoni, Fabio Mangini, Gabriele D’Andrea, Lorenzo Piazzo, Carmen Cantisani, Noah Musolff, Costantino Ricciuti and Fabrizio Frezza
Algorithms 2023, 16(10), 466; https://doi.org/10.3390/a16100466 - 02 Oct 2023
Cited by 3 | Viewed by 2258
Abstract
Skin cancer (SC) is one of the most common cancers in the world and is a leading cause of death in humans. Melanoma (M) is the most aggressive form of skin cancer and has an increasing incidence rate. Early and accurate diagnosis of M is critical to increase patient survival rates; however, its clinical evaluation is limited by the long timelines, variety of interpretations, and difficulty in distinguishing it from nevi (N) because of striking similarities. To overcome these problems and to support dermatologists, several machine-learning (ML) and deep-learning (DL) approaches have been developed. In the proposed work, melanoma detection, understood as an anomaly detection task with respect to the normal condition consisting of nevi, is performed with the help of a convolutional neural network (CNN) along with the handcrafted texture features of the dermoscopic images as additional input in the training phase. The aim is to evaluate whether the preprocessing and segmentation steps of dermoscopic images can be bypassed while maintaining high classification performance. Network training is performed on the ISIC2018 and ISIC2019 datasets, from which only melanomas and nevi are considered. The proposed network is compared with the most widely used pre-trained networks in the field of dermatology and shows better results in terms of classification and computational cost. It is also tested on the ISIC2016 dataset to provide a comparison with the literature: it achieves high performance in terms of accuracy, sensitivity, and specificity. Full article
(This article belongs to the Special Issue Deep Learning for Anomaly Detection)
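A minimal sketch of the handcrafted-feature branch using gray-level co-occurrence matrix (GLCM) texture descriptors; the distances, angles, and property list are assumptions:

```python
# Sketch: handcrafted GLCM texture features of a dermoscopic image, of the kind
# injected alongside CNN features during training.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

img = np.random.randint(0, 256, (128, 128), dtype=np.uint8)  # grayscale lesion crop
glcm = graycomatrix(img, distances=[1, 3], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)
features = np.hstack([graycoprops(glcm, prop).ravel()
                      for prop in ("contrast", "homogeneity", "energy", "correlation")])
print(features.shape)  # handcrafted vector to concatenate with CNN features
```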
14 pages, 1176 KiB  
Article
Exploring Graph and Digraph Persistence
by Mattia G. Bergomi and Massimo Ferri
Algorithms 2023, 16(10), 465; https://doi.org/10.3390/a16100465 - 02 Oct 2023
Viewed by 1156
Abstract
Among the various generalizations of persistent topology, that based on rank functions and leading to indexing-aware functions appears to be particularly suited to catching graph-theoretical properties without the need for a simplicial construction and a homology computation. This paper defines and studies “simple” and “single-vertex” features in directed and undirected graphs, through which several indexing-aware persistence functions are produced, within the scheme of steady and ranging sets. The implementation of the “sink” feature and its application to trust networks provide an example of the ease of use and meaningfulness of the method. Full article