Computational Intelligence and Machine Learning: Models and Applications

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Artificial Intelligence".

Deadline for manuscript submissions: 15 August 2024 | Viewed by 12,908

Special Issue Editor


Dr. Grzegorz Dudek
Guest Editor
Faculty of Electrical Engineering, Czestochowa University of Technology, 42-201 Częstochowa, Poland
Interests: machine learning; data mining; artificial intelligence; pattern recognition; evolutionary computation; their application to classification, regression, forecasting and optimization problems

Special Issue Information

Dear Colleagues,

Computational intelligence (CI) and machine learning (ML) are among the most exciting fields in computing today. In recent decades, they have become an entrenched part of everyday life and have been successfully used to solve practical problems. The application area of CI and ML is very broad and includes engineering, industry, business, finance, medicine and many other domains. They cover a wide range of computational and learning algorithms, including classical ones such as linear regression, k-nearest neighbors and decision trees, as well as fuzzy systems, genetic, swarm and evolutionary algorithms, support vector machines and neural networks, and newly developed algorithms such as deep learning and boosted tree models. In practice, it is quite challenging to determine the appropriate architecture and parameters for CI and ML models so that the resulting model achieves sound performance in both learning and generalization. Practical applications of CI and ML bring additional challenges, such as dealing with big, missing, distorted and uncertain data. In addition, interpretability is a paramount quality that CI and ML methods should achieve if they are to be applied in practice. Interpretability allows us to understand a model's operation and raises confidence in its results.

This Special Issue focuses on CI and ML models and their applications in a diverse range of fields and problems. We welcome papers reporting substantive results on a wide range of computational and learning methods, discussing conceptualization of a problem, data representation, feature engineering, CI and ML models, critical comparisons with existing techniques and interpretation of results. Specific attention will be given to recently developed CI and ML methods such as deep learning and boosted tree models.

Dr. Grzegorz Dudek
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • computational intelligence
  • machine learning
  • artificial intelligence
  • soft computing
  • fuzzy logic
  • evolutionary computing
  • neural networks
  • decision trees
  • deep learning
  • expert systems
  • data mining
  • supervised learning
  • unsupervised learning
  • reinforcement learning
  • probabilistic methods
  • knowledge representation
  • forecasting
  • big data
  • pattern recognition
  • natural language processing
  • computer vision
  • bioinformatics
  • information retrieval
  • sentiment analysis
  • recommendation systems
  • speech recognition

Published Papers (7 papers)


Research

25 pages, 1584 KiB  
Article
Picture Fuzzy Soft Matrices and Application of Their Distance Measures to Supervised Learning: Picture Fuzzy Soft k-Nearest Neighbor (PFS-kNN)
by Samet Memiş
Electronics 2023, 12(19), 4129; https://doi.org/10.3390/electronics12194129 - 3 Oct 2023
Cited by 2 | Viewed by 1488
Abstract
This paper redefines picture fuzzy soft matrices (pfs-matrices) to resolve inconsistencies arising from Cuong's definition of picture fuzzy sets. Then, it introduces several distance measures of pfs-matrices. Afterward, this paper proposes a new kNN-based classifier, namely the Picture Fuzzy Soft k-Nearest Neighbor (PFS-kNN) classifier. The proposed classifier utilizes the Minkowski metric of pfs-matrices to find the k nearest neighbors. Thereafter, an experimental study on four UCI medical datasets compares the suggested approach with state-of-the-art kNN-based classifiers. To evaluate classification performance, ten iterations of five-fold cross-validation are conducted for all the classifiers. The findings indicate that PFS-kNN surpasses the state-of-the-art kNN-based algorithms in 72 out of 128 performance results based on accuracy, precision, recall, and F1-score. More specifically, the proposed method achieves higher accuracy and F1-score results compared to the other classifiers. Simulation results show that pfs-matrices and PFS-kNN are capable of modeling uncertainty and real-world problems. Finally, the applications of pfs-matrices to supervised learning are discussed for further research.
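
For orientation, here is a minimal, generic sketch of the kNN side of such a classifier: a plain k-nearest-neighbor vote using a Minkowski distance on ordinary feature vectors. The pfs-matrix representation and the distance measures defined on it, which are the paper's actual contribution, are not reproduced; the data and parameters below are purely illustrative.

```python
# Illustrative sketch only: a plain k-nearest-neighbor classifier with a
# Minkowski distance on ordinary feature vectors. PFS-kNN replaces this
# distance with one defined on picture fuzzy soft matrices (not shown here).
import numpy as np
from collections import Counter

def minkowski(a, b, p=2):
    # Minkowski distance of order p between two feature vectors.
    return np.sum(np.abs(a - b) ** p) ** (1.0 / p)

def knn_predict(X_train, y_train, x, k=5, p=2):
    # Rank training samples by distance to x and vote among the k nearest.
    dists = [minkowski(xi, x, p) for xi in X_train]
    nearest = np.argsort(dists)[:k]
    return Counter(y_train[i] for i in nearest).most_common(1)[0][0]

# Toy usage with random data (hypothetical, for illustration only).
rng = np.random.default_rng(0)
X_train = rng.normal(size=(20, 4))
y_train = rng.integers(0, 2, size=20)
print(knn_predict(X_train, y_train, rng.normal(size=4), k=3, p=3))
```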

16 pages, 2102 KiB  
Article
Research on Log Anomaly Detection Based on Sentence-BERT
by Caiping Hu, Xuekui Sun, Hua Dai, Hangchuan Zhang and Haiqiang Liu
Electronics 2023, 12(17), 3580; https://doi.org/10.3390/electronics12173580 - 24 Aug 2023
Viewed by 1487
Abstract
Log anomaly detection is crucial for computer systems. By analyzing and processing the logs generated by a system, abnormal events or potential problems in the system can be identified, which helps maintain its stability and reliability. At present, due to the growing scale and complexity of software systems, the amount of log data has grown enormously, and traditional detection methods are unable to detect system anomalies in time. Therefore, it is important to design log anomaly detection methods with high accuracy and strong generalization. In this paper, we propose the log anomaly detection method LogADSBERT, which is based on Sentence-BERT. This method adopts the Sentence-BERT model to extract the semantic behavior characteristics of log events and implements anomaly detection through a bidirectional long short-term memory network (Bi-LSTM). Experiments on an open log dataset show that the accuracy of LogADSBERT is better than that of existing log anomaly detection methods. Moreover, LogADSBERT remains robust even under the scenario of new log event injections.
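
As a rough illustration of the general pipeline described (not the authors' implementation), the sketch below embeds log messages with a Sentence-BERT model and scores the event sequence with a Bi-LSTM; the model name, dimensions and scoring head are placeholder assumptions.

```python
# Minimal sketch: embed log events with Sentence-BERT, then run a Bi-LSTM
# over the event sequence to produce an anomaly score. All choices here
# (model name, hidden size, scoring head) are illustrative assumptions.
import torch
import torch.nn as nn
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")   # 384-dim sentence embeddings

class BiLSTMDetector(nn.Module):
    def __init__(self, emb_dim=384, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)          # anomaly score per sequence

    def forward(self, x):                             # x: (batch, seq_len, emb_dim)
        out, _ = self.lstm(x)
        return torch.sigmoid(self.head(out[:, -1]))   # score from last time step

logs = ["Connection established", "Disk quota exceeded", "Connection closed"]
emb = torch.tensor(encoder.encode(logs)).unsqueeze(0)  # one sequence of 3 events
print(BiLSTMDetector()(emb))                            # untrained score, demo only
```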

17 pages, 598 KiB  
Article
GGTr: An Innovative Framework for Accurate and Realistic Human Motion Prediction
by Biaozhang Huang and Xinde Li
Electronics 2023, 12(15), 3305; https://doi.org/10.3390/electronics12153305 - 1 Aug 2023
Viewed by 1488
Abstract
Human motion prediction involves forecasting future movements based on past observations, which is a complex task due to the inherent spatial-temporal dynamics of human motion. In this paper, we introduce a novel framework, GGTr, which adeptly encapsulates these patterns by integrating positional graph convolutional network (GCN) layers, gated recurrent unit (GRU) network layers, and transformer layers. The proposed model utilizes an enhanced GCN layer equipped with a positional representation to aggregate information from body joints more effectively. To address temporal dependencies, we strategically combine GRU and transformer layers, enabling the model to capture both local and global temporal dependencies across body joints. Through extensive experiments conducted on the Human3.6M and CMU-MoCap datasets, we demonstrate the superior performance of the proposed model. Notably, our framework shows significant improvements in predicting long-term movements, substantially outperforming state-of-the-art methods.
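
As a schematic illustration only, the sketch below shows how GRU and transformer layers can be stacked to capture local and global temporal dependencies over per-frame joint features; the positional GCN stage, the dimensions and the prediction head are simplified placeholders and do not reflect the GGTr architecture in detail.

```python
# Schematic sketch, not the GGTr implementation: a GRU models local temporal
# dynamics and a transformer encoder layer adds global context over frames.
import torch
import torch.nn as nn

class TemporalBlock(nn.Module):
    def __init__(self, feat_dim=66, hidden=128, heads=4):
        super().__init__()
        self.gru = nn.GRU(feat_dim, hidden, batch_first=True)                     # local dynamics
        self.attn = nn.TransformerEncoderLayer(hidden, heads, batch_first=True)   # global context
        self.out = nn.Linear(hidden, feat_dim)                                    # back to joint space

    def forward(self, x):                     # x: (batch, frames, joints * coords)
        h, _ = self.gru(x)
        h = self.attn(h)
        return self.out(h[:, -1:])            # predict the next frame

poses = torch.randn(2, 50, 66)                # 2 sequences, 50 frames, 22 joints x 3D (toy data)
print(TemporalBlock()(poses).shape)           # torch.Size([2, 1, 66])
```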

19 pages, 1845 KiB  
Article
Preference-Aware Light Graph Convolution Network for Social Recommendation
by Haoyu Xu, Guodong Wu, Enting Zhai, Xiu Jin and Lijing Tu
Electronics 2023, 12(11), 2397; https://doi.org/10.3390/electronics12112397 - 25 May 2023
Viewed by 1125
Abstract
Social recommendation systems leverage the abundant social information about users available on the Internet to mitigate the problem of data sparsity, ultimately enhancing recommendation performance. However, most existing recommendation systems that introduce social information ignore the negative messages passed by high-order neighbor nodes and aggregate messages without filtering, which results in a decline in recommendation performance. Considering this problem, we propose a novel social recommendation model based on graph neural networks (GNNs) called the preference-aware light graph convolutional network (PLGCN). It contains a subgraph-construction module that uses unsupervised learning to classify users according to their embeddings and assigns users with similar preferences to the same subgraph, filtering out useless or even negative messages from users with different preferences and thereby achieving better recommendation performance. We also design a feature aggregation module to better combine user embeddings with social and interaction information. In addition, we employ a lightweight GNN framework to aggregate messages from neighbors, removing nonlinear activation and feature transformation operations to alleviate the overfitting problem. Finally, we carried out comprehensive experiments on two publicly available datasets, and the results indicate that PLGCN outperforms current state-of-the-art (SOTA) methods, especially in dealing with the cold-start problem. The proposed model has the potential for practical applications in online recommendation systems, such as e-commerce, social media, and content recommendation.
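
The lightweight propagation rule mentioned in the abstract can be sketched as a LightGCN-style aggregation with a symmetrically normalized adjacency matrix and no nonlinearity or feature transformation; the subgraph construction and preference-aware filtering of PLGCN are not shown, and the toy graph below is hypothetical.

```python
# Minimal sketch of lightweight graph propagation: neighbor aggregation with a
# symmetrically normalized adjacency matrix, no activation, no weight matrices.
import numpy as np

def light_propagate(adj, emb, n_layers=3):
    # Symmetric normalization: A_hat = D^{-1/2} A D^{-1/2}
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.where(deg > 0, deg ** -0.5, 0.0)
    a_hat = adj * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    layers = [emb]
    for _ in range(n_layers):
        layers.append(a_hat @ layers[-1])     # plain neighborhood averaging
    return np.mean(layers, axis=0)            # final embedding = mean over layers

adj = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)  # toy graph
emb = np.random.default_rng(0).normal(size=(3, 8))              # toy node embeddings
print(light_propagate(adj, emb).shape)        # (3, 8)
```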

19 pages, 7745 KiB  
Article
Meteorological Variables Forecasting System Using Machine Learning and Open-Source Software
by Jenny Aracely Segovia, Jonathan Fernando Toaquiza, Jacqueline Rosario Llanos and David Raimundo Rivas
Electronics 2023, 12(4), 1007; https://doi.org/10.3390/electronics12041007 - 17 Feb 2023
Cited by 5 | Viewed by 2496
Abstract
Techniques for forecasting meteorological variables are widely studied, since prior knowledge of these variables enables the efficient management of renewable energy and supports other applications in science such as agriculture, health, engineering, and energy. In this research, the design, implementation, and comparison of forecasting models for meteorological variables have been performed using different machine learning techniques implemented with open-source Python software. The techniques implemented include multiple linear regression, polynomial regression, random forest, decision tree, XGBoost, and multilayer perceptron neural network (MLP). To identify the best technique, the root mean square error (RMSE), mean absolute percentage error (MAPE), mean absolute error (MAE), and coefficient of determination (R²) are used as evaluation metrics. The most efficient technique depends on the variable to be forecast; however, for most variables, random forest and XGBoost present better performance. For temperature, the best-performing technique was random forest with an R² of 0.8631, MAE of 0.4728 °C, MAPE of 2.73%, and RMSE of 0.6621 °C; for relative humidity, it was random forest with an R² of 0.8583, MAE of 2.1380 RH, MAPE of 2.50%, and RMSE of 2.9003 RH; for solar radiation, it was random forest with an R² of 0.7333, MAE of 65.8105 W/m², and RMSE of 105.9141 W/m²; and for wind speed, it was random forest with an R² of 0.3660, MAE of 0.1097 m/s, and RMSE of 0.2136 m/s.
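
A minimal sketch of the general workflow described (random forest regression in Python with RMSE, MAE, MAPE and R² evaluation) is given below; the synthetic data stand in for the meteorological series used in the paper, and the hyperparameters are illustrative.

```python
# Hedged sketch of the workflow: fit a random forest regressor and report
# RMSE, MAE, MAPE and R^2. Data are synthetic placeholders, not the paper's.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 6))                                   # e.g. lagged weather features
y = 10 + 3 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(0.0, 0.1, 500)  # synthetic target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)

rmse = mean_squared_error(y_te, pred) ** 0.5
mae = mean_absolute_error(y_te, pred)
mape = np.mean(np.abs((y_te - pred) / y_te)) * 100               # targets kept away from zero
print(f"RMSE={rmse:.3f}  MAE={mae:.3f}  MAPE={mape:.2f}%  R2={r2_score(y_te, pred):.3f}")
```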

17 pages, 757 KiB  
Article
Self-Supervised Graph Attention Collaborative Filtering for Recommendation
by Jiangqiang Zhu, Kai Li, Jinjia Peng and Jing Qi
Electronics 2023, 12(4), 793; https://doi.org/10.3390/electronics12040793 - 5 Feb 2023
Cited by 3 | Viewed by 1475
Abstract
Due to the complementary nature of graph neural networks and structured data in recommendations, recommendation systems using graph neural network techniques have become mainstream. However, there are still problems in the recommendation task, such as sparse supervised signals and interaction noise. Therefore, this paper proposes self-supervised graph attention collaborative filtering for recommendation (SGACF). The correlation between adjacent nodes is deeply mined using a multi-head graph attention network to obtain accurate node representations. Notably, self-supervised learning is brought in as an auxiliary task, with the supervised recommendation task remaining the main task; the auxiliary task assists model training for the supervised task. Multiple views of a node are generated by a graph data-augmentation method. We maximize the consistency between different views of the same node and minimize the consistency between views of different nodes. The effectiveness of the method is illustrated by abundant experiments on three public datasets. The results show significant improvements in long-tail item recommendation accuracy and in model robustness.
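
The contrastive auxiliary objective described in the abstract can be sketched with an InfoNCE-style loss, as below: two augmented views of the same node form a positive pair and views of other nodes act as negatives. The graph attention backbone and the data-augmentation steps are not reproduced, and the tensors are random placeholders.

```python
# Minimal sketch of an InfoNCE-style contrastive loss over two views of the
# same set of nodes. Matching rows are positives; all other rows are negatives.
import torch
import torch.nn.functional as F

def info_nce(view_a, view_b, temperature=0.2):
    # view_a, view_b: (n_nodes, dim) embeddings of the same nodes under two augmentations.
    a = F.normalize(view_a, dim=1)
    b = F.normalize(view_b, dim=1)
    logits = a @ b.t() / temperature            # similarity of every pair of views
    labels = torch.arange(a.size(0))            # matching row = positive pair
    return F.cross_entropy(logits, labels)

va = torch.randn(64, 32)
vb = va + 0.1 * torch.randn(64, 32)             # a slightly perturbed second view
print(info_nce(va, vb))
```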

16 pages, 3119 KiB  
Article
ArRASA: Channel Optimization for Deep Learning-Based Arabic NLU Chatbot Framework
by Meshrif Alruily
Electronics 2022, 11(22), 3745; https://doi.org/10.3390/electronics11223745 - 15 Nov 2022
Cited by 2 | Viewed by 2080
Abstract
Since the introduction of deep learning-based chatbots for knowledge services, many research and development efforts have been undertaken in a variety of fields. The global market for chatbots has grown dramatically as a result of strong demand. Nevertheless, the limited functional scalability of open-domain chatbots poses a challenge to their implementation in industry. Much work has been performed on creating chatbots for languages such as English and Chinese, but there is still a need to develop chatbots for other widely used languages such as Arabic and Persian. In this paper, we introduce ArRASA, a channel-optimization strategy based on a deep-learning platform to create a chatbot that understands Arabic. ArRASA is a closed-domain chatbot that can be used in any Arabic industry. The proposed system consists of four major parts: text tokenization, featurization, intent classification and entity extraction. The performance of ArRASA is evaluated using traditional assessment metrics, i.e., accuracy and F1 score, for the intent classification and entity extraction tasks in the Arabic language. The proposed framework achieves promising results, securing accuracy and F1 scores of 96% and 94% for intent classification and 94% and 95% for entity extraction, respectively.
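
As an illustration of the intent-classification stage only, the sketch below uses a simple TF-IDF plus logistic-regression pipeline as a stand-in; ArRASA itself relies on a deep-learning pipeline with dedicated tokenization, featurization, intent-classification and entity-extraction components, and the Arabic utterances and intent labels here are hypothetical.

```python
# Stand-in sketch for the intent-classification stage: character n-gram TF-IDF
# features with a logistic-regression classifier on toy Arabic utterances.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical Arabic utterances and intent labels, for illustration only.
utterances = ["ما هي ساعات العمل؟", "أريد حجز موعد", "كم سعر الخدمة؟", "احجز لي موعدا غدا"]
intents = ["hours", "booking", "pricing", "booking"]

clf = make_pipeline(TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
                    LogisticRegression(max_iter=1000))
clf.fit(utterances, intents)
print(clf.predict(["هل يمكن حجز موعد؟"]))      # likely 'booking' on this toy data
```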
