Editorial

Special Issue on Supervised and Unsupervised Classification Algorithms—Foreword from Guest Editors

by Laura Antonelli 1,* and Mario Rosario Guarracino 1,2
1 Consiglio Nazionale delle Ricerche, Institute for High-Performance Computing and Networking, Via Pietro Castellino, 111, I-80131 Naples, Italy
2 Department of Economics and Law, University of Cassino and Southern Lazio, Viale dell’Università, Loc. Folcara, I-03043 Cassino, Italy
* Author to whom correspondence should be addressed.
Algorithms 2023, 16(3), 145; https://doi.org/10.3390/a16030145
Submission received: 10 February 2023 / Accepted: 3 March 2023 / Published: 7 March 2023
(This article belongs to the Special Issue Supervised and Unsupervised Classification Algorithms)

1. Introduction

Supervised and unsupervised classification algorithms are the two main branches of machine learning. Supervised classification refers to training a system on labeled data divided into classes and then assigning new data to these existing classes. The process consists of computing a model from a set of labeled training data and then applying the model to predict the class label of incoming unlabeled data. This is called supervised learning because the labels are provided by a supervisor and are therefore assumed to be correct. Regression is a generalization of classification in which the class label is a continuous variable.
In unsupervised classification, the data are unlabeled; therefore, lacking prior knowledge, the algorithm searches for similarities between points in the dataset. Unsupervised classification algorithms include, but are not limited to, clustering, density estimation, outlier detection, and dimensionality reduction.
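As a minimal, illustrative sketch of the two settings (not drawn from any paper in this Issue, and assuming scikit-learn is available), the following Python fragment trains a supervised classifier on labeled data and then clusters the same data without using the labels:

    # Minimal sketch: supervised vs. unsupervised classification (illustrative only).
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.cluster import KMeans

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Supervised: the labels y_train act as the "supervisor" during training.
    clf = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
    print("supervised test accuracy:", clf.score(X_test, y_test))

    # Unsupervised: no labels are used; points are grouped by similarity alone.
    km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X_train)
    print("cluster assignments for unseen data:", km.predict(X_test)[:10])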
Applications range from object detection in biomedical images and disease prediction to natural language understanding and generation.
This Special Issue collects high-quality research papers on how supervised and unsupervised classification algorithms have been modified to overcome the limits of standard formulations, or explicitly designed to solve real-life problems efficiently. Here, we briefly present the contents of the papers and invite readers to select the articles that interest them and explore their findings.
In [1], the authors present KosaNet, a multinomial classification algorithm based on decision trees for analyzing dissolved gases in oil-impregnated power transformers. Power transformers are expensive pieces of electrical network equipment and are critical to the overall reliability and operability of a provider’s network. Tests performed on real data provided by the utility company Kenya Power Ltd show that KosaNet outperforms state-of-the-art classifiers, especially when dealing with multinomial data.
In [2], a novel temporal symbolic regression method is developed to solve air-quality modeling and forecasting problems. Several tests were performed on a real air-quality database containing traffic volume, meteorological parameters, and pollution measurements collected in the Polish city of Wrocław from 2015 to 2017. The experimental results show that the proposed strategy achieves superior statistical performance compared to classical approaches, ranging from simple linear regression methods to recurrent neural networks.
The study in [3] analyzes the effects of nonlinearity on the performance of deep learning methods using the well-known activation functions ReLU and L-ReLU (leaky ReLU). Several tests were performed on MNIST datasets, using different data domains as the network input. The authors empirically show that ReLU is more effective than L-ReLU when the neural network has a sufficient number of trainable parameters, whereas L-ReLU outperforms ReLU in terms of accuracy in classification problems. They also show that the information loss due to the activation functions can be evaluated through the entropy function, which provides a measure of randomness or disorder in the information flow of the neural network.
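For reference, the two activation functions compared in [3] have simple closed forms; the short NumPy sketch below uses a leaky slope of 0.01, a common default assumed here purely for illustration and not necessarily the value adopted in the paper:

    import numpy as np

    def relu(x):
        # ReLU(x) = max(0, x): negative inputs are zeroed out.
        return np.maximum(0.0, x)

    def leaky_relu(x, alpha=0.01):
        # L-ReLU(x) = x if x > 0, else alpha * x: keeps a small slope for x < 0.
        # alpha = 0.01 is assumed here as a common default.
        return np.where(x > 0, x, alpha * x)

    x = np.array([-2.0, -0.5, 0.0, 1.5])
    print(relu(x))        # [0.  0.  0.  1.5]
    print(leaky_relu(x))  # [-0.02  -0.005  0.  1.5]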
A new Fuzzy C-Means (FCM) algorithm is proposed in [4], which uses multiple fuzzification coefficients to enhance the performance of the basic formulation. The fuzzification coefficient is the exponent applied to the fuzzy membership values that weight the element-cluster squared distances in the objective function to be minimized. Usually, there is no criterion for selecting this coefficient, and the same value is chosen for all elements after several trials. Here, the key idea is to set the coefficient of each element based on the concentration of its surrounding elements. A high concentration means a high probability of forming a cluster; therefore, a small fuzzification coefficient is selected for faster convergence. Conversely, a large coefficient is chosen for low concentrations to increase the probability of cluster selection. Experimental tests run on several datasets from UCR showed the advantages of the new formulation in terms of accuracy and efficiency.
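To make the role of the fuzzification coefficient concrete, the sketch below evaluates the standard FCM objective, in which a single exponent m weights the element-cluster squared distances; the contribution of [4] can be read, roughly, as replacing this single m with a per-element coefficient driven by the local concentration of points (the precise update rules are given in the paper):

    import numpy as np

    def fcm_objective(X, centers, U, m=2.0):
        # Standard FCM objective: J = sum_i sum_k U[i, k]**m * ||X[i] - centers[k]||**2.
        # In [4], the single exponent m is (roughly speaking) replaced by a
        # per-element m_i chosen from the concentration of points around element i.
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        return float((U ** m * d2).sum())

    rng = np.random.default_rng(0)
    X = rng.normal(size=(6, 2))            # 6 elements in 2-D
    centers = rng.normal(size=(2, 2))      # 2 cluster centers
    U = rng.random((6, 2))
    U /= U.sum(axis=1, keepdims=True)      # memberships of each element sum to 1
    print(fcm_objective(X, centers, U, m=2.0))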
In [5], a new clustering algorithm based on dynamic time warping (DTW) was designed to analyze car park data with natural time series characteristics and periodicity. The proposed time series clustering framework comprises four main steps: data preprocessing, distance measurement, clustering, and evaluation. First, the authors exploit the weekly periodicity of the data to fill in missing values. Second, the DTW distance is employed as the distance measure for the car park time series. Third, the density-based partition around medoids (DBPAM) clustering method is adopted to cluster the data. Finally, the purity metric is applied to evaluate the results. Several tests were conducted using 9 UCR datasets and data collected from 27 car parks operated by NCP from Birmingham City Council during two months of 2016. The results showed that the designed classifier performs better than approaches based on Euclidean distance measurement and traditional DTW-based clustering models.
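As a rough illustration of the distance at the core of this framework, the following dynamic-programming implementation computes the classic DTW distance between two univariate series of possibly different lengths (illustrative only; the paper’s full pipeline also includes the missing-value filling, DBPAM clustering, and purity evaluation steps summarized above):

    import numpy as np

    def dtw_distance(a, b):
        # Classic dynamic-programming DTW between two 1-D series.
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                # Extend the cheapest of the three allowed warping moves.
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    # Two series with the same shape but shifted in time: DTW remains 0,
    # whereas a point-wise (Euclidean-style) comparison would not.
    s1 = [0, 1, 2, 3, 2, 1, 0]
    s2 = [0, 0, 1, 2, 3, 2, 1, 0]
    print(dtw_distance(s1, s2))  # 0.0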

Author Contributions

Writing—original draft, L.A.; supervision, M.R.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

The Guest Editors would like to thank all researchers who selected this Special Issue for submitting their work, the invited expert reviewers for their free and professional availability, MDPI Office, and the Algorithms Editorial Office for their assistance and support.

Conflicts of Interest

The Guest Editors declare no conflict of interest.

References

1. Odongo, G.; Musabe, R.; Hanyurwimfura, D. A Multinomial DGA Classifier for Incipient Fault Detection in Oil-Impregnated Power Transformers. Algorithms 2021, 14, 128.
2. Lucena-Sánchez, E.; Sciavicco, G.; Stan, I.E. Feature and Language Selection in Temporal Symbolic Regression for Interpretable Air Quality Modelling. Algorithms 2021, 14, 76.
3. Kulathunga, N.; Ranasinghe, N.R.; Vrinceanu, D.; Kinsman, Z.; Huang, L.; Wang, Y. Effects of Nonlinearity and Network Architecture on the Performance of Supervised Neural Networks. Algorithms 2021, 14, 51.
4. Khang, T.D.; Vuong, N.D.; Tran, M.-K.; Fowler, M. Fuzzy C-Means Clustering Algorithm with Multiple Fuzzification Coefficients. Algorithms 2020, 13, 158.
5. Li, T.; Wu, X.; Zhang, J. Time Series Clustering Model Based on DTW for Classifying Car Parks. Algorithms 2020, 13, 57.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
