Soft Computing and Uncertainty Learning with Applications

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "Engineering Mathematics".

Deadline for manuscript submissions: 30 September 2024 | Viewed by 13838

Special Issue Editors


Prof. Dr. Xiaodong Yue
Guest Editor
School of Computer Engineering and Science, Shanghai University, Shanghai 200444, China
Interests: machine learning; soft computing; image analysis; data mining

Prof. Dr. Shu Zhao
Guest Editor
School of Computer Science and Technology, Anhui University, Hefei 230601, China
Interests: granular computing; network representation learning; knowledge graph; social network analysis

Prof. Dr. Jie Zhou
Guest Editor
College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, Guangdong 518060, China
Interests: uncertainty analysis; pattern recognition; intelligent systems

Special Issue Information

Dear Colleagues,

Soft computing methodologies, including fuzzy sets, rough sets, evidence theory, and other flexible mathematical models, provide an effective theoretical framework for formulating and processing uncertain data and knowledge. In recent years, soft computing methods have developed rapidly through their combination with machine learning and other artificial intelligence techniques, and have been successfully applied to a wide range of intelligent data analysis tasks.

This Special Issue will focus on recent theoretical and computational studies of soft computing models, algorithms, systems and applications. Topics include, but are not limited to:

  1. Fundamental soft computing models.
  2. Granular computing methodologies.
  3. Three-way decision methodologies.
  4. Uncertain and approximate reasoning.
  5. Uncertain machine learning with soft computing.
  6. Machine learning for uncertain data and knowledge.
  7. Expert systems based on soft computing and machine learning.
  8. Data applications in social media, business intelligence, medicine and healthcare, bioinformatics, manufacturing, cybernetics and robotics, etc.

Cross-field theoretical studies and applications in soft computing and machine learning are particularly welcome in this Special Issue.

Prof. Dr. Xiaodong Yue
Prof. Dr. Shu Zhao
Prof. Dr. Jie Zhou
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • soft computing
  • machine learning
  • uncertainty theory
  • expert systems
  • data applications

Published Papers (9 papers)


Research

19 pages, 515 KiB  
Article
Stream Convolution for Attribute Reduction of Concept Lattices
by Jianfeng Xu, Chenglei Wu, Jilin Xu, Lan Liu and Yuanjian Zhang
Mathematics 2023, 11(17), 3739; https://doi.org/10.3390/math11173739 - 30 Aug 2023
Viewed by 626
Abstract
Attribute reduction is a crucial research area within concept lattices. However, existing works are mostly limited to either incremental or decremental algorithms rather than considering both, so dealing with large-scale streaming attributes in both cases may be inefficient. Convolution calculation in deep learning involves a dynamic data processing method in the form of sliding windows. Inspired by this, we adopt slide-in and slide-out windows in convolution calculation to update attribute reduction. Specifically, we study the attribute-changing mechanism in the sliding-window mode of convolution and investigate five attribute-variation cases. These cases consider the respective intersections of the slide-in and slide-out attributes, i.e., equal to, disjoint with, partially joint with, containing, and contained by. We then propose an update solution for the reduction set when attributes slide in and out simultaneously. Meanwhile, we propose the CLARA-DC algorithm, which aims to solve the problem of inefficient attribute reduction for large-scale streaming data. Finally, through experimental comparison on four UCI datasets, CLARA-DC achieves higher efficiency and scalability in dealing with large-scale datasets. It adapts to varying types and sizes of datasets, boosting efficiency by an average of 25%.
(This article belongs to the Special Issue Soft Computing and Uncertainty Learning with Applications)
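For readers unfamiliar with the sliding-window setting, the five attribute-variation cases above reduce to set relations between the slide-in and slide-out attribute sets. The following minimal sketch (a hypothetical helper, not code from the paper) classifies a pair of attribute sets into those cases:

```python
def classify_window_change(slide_in: set, slide_out: set) -> str:
    """Classify the relation between slide-in and slide-out attribute sets
    (the five cases studied for sliding-window attribute reduction)."""
    if slide_in == slide_out:
        return "equal"
    if slide_in.isdisjoint(slide_out):
        return "disjoint"
    if slide_in > slide_out:          # slide-in strictly contains slide-out
        return "contains"
    if slide_in < slide_out:          # slide-in strictly contained by slide-out
        return "contained_by"
    return "partially_joint"          # overlapping, but neither contains the other

# Example: attributes {a, b} slide in while {b, c} slide out -> partial overlap
print(classify_window_change({"a", "b"}, {"b", "c"}))  # partially_joint
```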

16 pages, 1779 KiB  
Article
An Unsupervised Rapid Network Alignment Framework via Network Coarsening
by Lei Zhang, Feng Qian, Jie Chen and Shu Zhao
Mathematics 2023, 11(3), 573; https://doi.org/10.3390/math11030573 - 21 Jan 2023
Cited by 1 | Viewed by 1260
Abstract
Network alignment aims to identify the correspondence of nodes between two or more networks. It is the cornerstone of many network mining tasks, such as cross-platform recommendation and cross-network data aggregation. Recently, with the development of network representation learning techniques, researchers have proposed many embedding-based network alignment methods, which outperform traditional methods. However, several issues and challenges remain for network alignment tasks, such as the lack of labeled data, mapping across network embedding spaces, and computational efficiency. Based on the graph neural network (GNN), we propose the URNA (unsupervised rapid network alignment) framework to achieve an effective balance between accuracy and efficiency. It has two phases: model training and network alignment. In the training phase, we first compress the original networks into small coarse networks and use them to accelerate GNN training; we also use parameter sharing to guarantee the consistency of the embedding spaces and an unsupervised loss function to update the parameters. In the network alignment phase, we first use a single forward pass to learn node embeddings of the original networks, and then use multi-order embeddings from the outputs of all convolutional layers to calculate the similarity of nodes between the two networks via the vector inner product for alignment. Experimental results on real-world datasets show that the proposed method can significantly reduce running time and memory requirements while guaranteeing alignment performance.
(This article belongs to the Special Issue Soft Computing and Uncertainty Learning with Applications)
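The alignment phase described above scores node pairs by the inner product of multi-order embeddings. Below is a minimal sketch of that step only, with illustrative shapes and function names (the GNN training and network coarsening are omitted):

```python
import numpy as np

def alignment_scores(layer_embs_g1, layer_embs_g2):
    """Score node pairs across two networks by the inner product of
    multi-order embeddings (concatenation of all GNN layer outputs).

    layer_embs_g1 / layer_embs_g2: lists of (n_nodes, dim) arrays,
    one per convolutional layer (illustrative shapes).
    """
    z1 = np.concatenate(layer_embs_g1, axis=1)   # (n1, sum of layer dims)
    z2 = np.concatenate(layer_embs_g2, axis=1)   # (n2, sum of layer dims)
    return z1 @ z2.T                             # (n1, n2) similarity matrix

# Toy usage: two layers of 4-d embeddings for 3 and 2 nodes
rng = np.random.default_rng(0)
g1 = [rng.normal(size=(3, 4)) for _ in range(2)]
g2 = [rng.normal(size=(2, 4)) for _ in range(2)]
sim = alignment_scores(g1, g2)
aligned = sim.argmax(axis=1)   # greedy alignment: best match per node of G1
```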

15 pages, 2028 KiB  
Article
A Novel Neighborhood Granular Meanshift Clustering Algorithm
by Qiangqiang Chen, Linjie He, Yanan Diao, Kunbin Zhang, Guoru Zhao and Yumin Chen
Mathematics 2023, 11(1), 207; https://doi.org/10.3390/math11010207 - 31 Dec 2022
Cited by 3 | Viewed by 2157
Abstract
The most popular algorithms used in unsupervised learning are clustering algorithms, which group samples into classes or clusters based on the distances between the given sample features. How to define the distance between samples is therefore important for a clustering algorithm. Traditional clustering algorithms are generally based on the Mahalanobis distance and the Minkowski distance, which have difficulty dealing with set-based data and uncertain nonlinear data. To solve this problem, we propose a granular vector relative distance and a granular vector absolute distance based on the neighborhood granule operation, and we further propose the neighborhood granular meanshift clustering algorithm. Finally, the effectiveness of neighborhood granular meanshift clustering is demonstrated using external metrics (accuracy and the Fowlkes–Mallows index) and an internal metric (the silhouette coefficient) on multiple datasets from the UC Irvine Machine Learning Repository (UCI). We find that the granular meanshift clustering algorithm clusters better than traditional clustering algorithms such as k-means and Gaussian mixture models.
(This article belongs to the Special Issue Soft Computing and Uncertainty Learning with Applications)
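As a rough illustration of where a granular distance would enter, the sketch below shows a plain mean-shift update with a pluggable pairwise distance function; it is a generic baseline under that assumption, not the paper's granular implementation:

```python
import numpy as np

def mean_shift(X, bandwidth=1.0, dist=None, n_iter=50):
    """Plain mean-shift with a pluggable pairwise distance; a granular
    vector distance could be substituted via `dist` (illustrative)."""
    if dist is None:
        dist = lambda a, B: np.linalg.norm(B - a, axis=1)  # Euclidean default
    modes = X.copy()
    for _ in range(n_iter):
        for i, m in enumerate(modes):
            d = dist(m, X)
            w = np.exp(-(d / bandwidth) ** 2)          # Gaussian kernel weights
            modes[i] = (w[:, None] * X).sum(0) / w.sum()
    return modes  # points shifted toward density modes; nearby modes form clusters

X = np.vstack([np.random.randn(20, 2), np.random.randn(20, 2) + 5])
print(np.round(mean_shift(X, bandwidth=2.0)[:3], 2))
```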

16 pages, 4544 KiB  
Article
Open-Set Recognition Model Based on Negative-Class Sample Feature Enhancement Learning Algorithm
by Guowei Yang, Shijie Zhou and Minghua Wan
Mathematics 2022, 10(24), 4725; https://doi.org/10.3390/math10244725 - 12 Dec 2022
Cited by 2 | Viewed by 1371
Abstract
To address the problem that the F1-measure and AUROC values of some classical open-set classifiers do not exceed 40% in high-openness scenarios, this paper proposes an algorithm that combines negative-class feature enhancement learning with a Weibull-distribution representation based on extreme value theory, which can effectively reduce open-space risk in open-set scenarios. Firstly, the method uses the negative-class sample feature enhancement learning algorithm to generate a set of negative sample points with similar features and then computes the corresponding negative-class feature segmentation hypersphere. Secondly, paired Weibull distributions for positive and negative samples are established based on each class's negative-class feature segmentation hypersphere. Finally, solutions for non-linear multi-class classification are constructed using the Weibull and reverse Weibull distributions. Experiments on classic open datasets, such as the open dataset of letter recognition, the Caltech256 open dataset, and the CIFAR100 open dataset, show that when the openness exceeds 60%, the performance of the proposed method is significantly higher than that of other open-set support vector classifiers, by more than 7% on average.
(This article belongs to the Special Issue Soft Computing and Uncertainty Learning with Applications)
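A hedged sketch of the extreme-value step: fitting a Weibull distribution to the largest distances from a class and using its CDF as an outlier probability. The function names, the tail size, and the recalibration rule are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np
from scipy.stats import weibull_min

def fit_tail_weibull(distances, tail_size=20):
    """Fit a Weibull to the largest distances from a class centre (EVT tail)."""
    tail = np.sort(np.asarray(distances))[-tail_size:]
    shape, loc, scale = weibull_min.fit(tail, floc=0.0)
    return shape, loc, scale

def outlier_probability(d, params):
    """CDF of the fitted Weibull at distance d: values near 1 suggest open-set input."""
    shape, loc, scale = params
    return weibull_min.cdf(d, shape, loc=loc, scale=scale)

# Toy usage: known-class distances cluster near 1; a distance of 3 looks open-set
known = np.abs(np.random.randn(200)) + 1.0
params = fit_tail_weibull(known)
print(outlier_probability(3.0, params))
```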

13 pages, 436 KiB  
Article
Reliable Multi-View Deep Patent Classification
by Liyuan Zhang, Wei Liu, Yufei Chen and Xiaodong Yue
Mathematics 2022, 10(23), 4545; https://doi.org/10.3390/math10234545 - 01 Dec 2022
Cited by 3 | Viewed by 1775
Abstract
Patent classification has long been regarded as a crucial task in patent information management and patent knowledge mining. In recent years, studies on automatic patent classification with deep neural networks have increased significantly. Although great efforts have been made on deep patent classification, they mainly focus on information extraction from a single view (e.g., the title or abstract view); few studies concern multi-view deep patent classification, which aims to improve classification performance by integrating information from different views. To that end, we propose a reliable multi-view deep patent classification method. Within this method, we fuse multi-view patent information at the evidence level from the perspective of evidence theory, which not only effectively improves classification performance but also provides a reliable uncertainty estimate, addressing the unreliability of classification results caused by property differences and inconsistencies among the different patent information sources. In addition, we theoretically prove that our approach reduces the uncertainty of classification results through the fusion of multiple patent views, thus improving both the performance and the reliability of the classification results. Experimental results on 759,809 real-world multi-view patent records from Shanghai, China, demonstrate the effectiveness, reliability, and robustness of our approach.
(This article belongs to the Special Issue Soft Computing and Uncertainty Learning with Applications)
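Evidence-level fusion of two views can be illustrated with the reduced Dempster combination of per-class beliefs and an uncertainty mass, in the spirit of evidential multi-view learning; this is a sketch under that assumption, not the authors' code:

```python
import numpy as np

def combine_views(b1, u1, b2, u2):
    """Reduced Dempster combination of two evidential 'opinions':
    per-class beliefs b (summing with u to 1) and overall uncertainty u."""
    b1, b2 = np.asarray(b1), np.asarray(b2)
    conflict = np.sum(np.outer(b1, b2)) - np.sum(b1 * b2)  # mass on disagreeing classes
    scale = 1.0 - conflict
    b = (b1 * b2 + b1 * u2 + b2 * u1) / scale
    u = (u1 * u2) / scale
    return b, u

# Two views over 3 classes: view 1 is confident in class 0, view 2 is vaguer
b, u = combine_views([0.7, 0.1, 0.0], 0.2, [0.3, 0.2, 0.1], 0.4)
print(np.round(b, 3), round(u, 3))  # fused beliefs; uncertainty drops below both inputs
```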

15 pages, 410 KiB  
Article
A Novel Ensemble Strategy Based on Determinantal Point Processes for Transfer Learning
by Ying Lv, Bofeng Zhang, Xiaodong Yue and Zhikang Xu
Mathematics 2022, 10(23), 4409; https://doi.org/10.3390/math10234409 - 23 Nov 2022
Viewed by 1250
Abstract
Transfer learning (TL) aims to train a model for target-domain tasks by using knowledge from different but related source domains. Most TL methods focus on improving the predictive performance of a single model across domains. Since domain differences cannot be avoided, the knowledge the source domain can provide about the target domain is limited, and the transfer model has to predict out-of-distribution (OOD) data in the target domain. However, a single model's predictions are unstable on OOD data, which can easily cause negative transfer. To solve this problem, we propose a parallel ensemble strategy based on determinantal point processes (DPP) for transfer learning. In this strategy, we first propose an improved DPP sampling method to generate training subsets with higher transferability and diversity. Second, we use these subsets to train the base models. Finally, the base models are fused according to the adaptability of their subsets. To validate the effectiveness of the ensemble strategy, we couple it with both traditional TL models and deep TL models and evaluate transfer performance on text and image datasets. The experimental results show that our proposed ensemble strategy can significantly improve the performance of the transfer model.
(This article belongs to the Special Issue Soft Computing and Uncertainty Learning with Applications)
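A standard greedy MAP approximation for DPP subset selection over a similarity kernel illustrates the sampling idea; the paper's improved sampler additionally accounts for transferability, which this sketch omits:

```python
import numpy as np

def greedy_dpp(L, k):
    """Greedy MAP selection for a DPP with kernel L: repeatedly add the item
    that most increases log det(L_S). A standard approximation only."""
    n = L.shape[0]
    selected = []
    for _ in range(k):
        best, best_logdet = None, -np.inf
        for i in set(range(n)) - set(selected):
            idx = selected + [i]
            sign, logdet = np.linalg.slogdet(L[np.ix_(idx, idx)])
            if sign > 0 and logdet > best_logdet:
                best, best_logdet = i, logdet
        if best is None:
            break
        selected.append(best)
    return selected

# Toy kernel: similarity of 6 random feature vectors; diverse items get picked
X = np.random.randn(6, 4)
L = X @ X.T + 1e-6 * np.eye(6)
print(greedy_dpp(L, 3))
```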

15 pages, 1915 KiB  
Article
Granular Elastic Network Regression with Stochastic Gradient Descent
by Linjie He, Yumin Chen, Caiming Zhong and Keshou Wu
Mathematics 2022, 10(15), 2628; https://doi.org/10.3390/math10152628 - 27 Jul 2022
Cited by 8 | Viewed by 1488
Abstract
Linear regression uses linear functions to model the relationship between a dependent variable and one or more independent variables. Linear regression models have been widely used in fields such as finance, industry, and medicine. To address the problem that traditional linear regression models have difficulty handling uncertain data, we propose a granule-based elastic network regression model. First, we construct granules and granular vectors by granulation methods. Then, we define multiple granular operation rules so that the model can effectively handle uncertain data. Further, the granular norm and the granular vector norm are defined to design the granular loss function and construct the granular elastic network regression model. After that, we derive the gradient of the granular loss function and design a granular elastic network gradient descent optimization algorithm. Finally, we perform experiments on UCI datasets to verify the validity of the granular elastic network and find that it fits better than the traditional linear regression model.
(This article belongs to the Special Issue Soft Computing and Uncertainty Learning with Applications)
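To make the loss and update rule concrete, here is a plain (non-granular) elastic net fitted with stochastic gradient descent; the granular model replaces the numeric vectors and norms with granular ones, which this sketch does not attempt:

```python
import numpy as np

def elastic_net_sgd(X, y, alpha=0.01, l1_ratio=0.5, lr=0.01, epochs=200, seed=0):
    """Elastic net by SGD: squared error + alpha*(l1_ratio*L1 + (1-l1_ratio)/2*L2).
    Plain numeric sketch; the granular version swaps in granular vectors/norms."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        for i in rng.permutation(n):
            err = X[i] @ w + b - y[i]
            grad_w = err * X[i] + alpha * (l1_ratio * np.sign(w) + (1 - l1_ratio) * w)
            w -= lr * grad_w
            b -= lr * err
    return w, b

# Toy usage: recover a sparse linear relation
X = np.random.randn(200, 5)
w_true = np.array([2.0, 0.0, -1.0, 0.0, 0.5])
y = X @ w_true + 0.1 * np.random.randn(200)
w, b = elastic_net_sgd(X, y)
print(np.round(w, 2))
```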

18 pages, 3478 KiB  
Article
A New Bilinear Supervised Neighborhood Discrete Discriminant Hashing
by Xueyu Chen, Minghua Wan, Hao Zheng, Chao Xu, Chengli Sun and Zizhu Fan
Mathematics 2022, 10(12), 2110; https://doi.org/10.3390/math10122110 - 17 Jun 2022
Cited by 2 | Viewed by 1070
Abstract
Feature extraction is an important part of perceptual hashing, and how to compress the robust features of images into hash codes has become a hot research topic. Converting a two-dimensional image into a one-dimensional descriptor requires a higher computational cost and is not optimal. To maintain the internal feature structure of the original two-dimensional image, a new Bilinear Supervised Neighborhood Discrete Discriminant Hashing (BNDDH) algorithm is proposed in this paper. Firstly, the algorithm constructs two new neighborhood graphs to maintain the geometric relationship between samples and reduces the quantization loss by directly constraining the hash codes. Secondly, two small rotation matrices are used to realize a bilinear projection of the two-dimensional descriptor. Finally, experiments verify the performance of the BNDDH algorithm under different feature types, such as original image pixels and a Convolutional Neural Network (CNN)-based AlexConv5 feature. The experimental results and discussion clearly show that the proposed BNDDH algorithm outperforms existing traditional hashing algorithms and can represent images more efficiently.
(This article belongs to the Special Issue Soft Computing and Uncertainty Learning with Applications)
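The bilinear projection step maps a two-dimensional descriptor X to a binary code via two small matrices, roughly sign(R1^T X R2). The sketch below shows only that encoding step with illustrative shapes; learning R1, R2 and the supervised neighborhood graphs is the paper's actual contribution:

```python
import numpy as np

def bilinear_hash(X, R1, R2):
    """Encode a 2-D descriptor X (d1 x d2) into a binary code by bilinear
    projection with two small rotation-like matrices: sign(R1^T X R2)."""
    codes = np.sign(R1.T @ X @ R2)                 # (c1, c2) matrix of +/-1
    return (codes.flatten() > 0).astype(np.uint8)  # flatten to a bit string

d1, d2, c1, c2 = 32, 32, 8, 8
R1 = np.linalg.qr(np.random.randn(d1, c1))[0]   # random orthogonal columns
R2 = np.linalg.qr(np.random.randn(d2, c2))[0]
X = np.random.randn(d1, d2)                     # e.g., raw pixels or a CNN feature map
print(bilinear_hash(X, R1, R2))                 # 64-bit code
```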

21 pages, 1698 KiB  
Article
Intuitionistic Fuzzy-Based Three-Way Label Enhancement for Multi-Label Classification
by Tianna Zhao, Yuanjian Zhang and Duoqian Miao
Mathematics 2022, 10(11), 1847; https://doi.org/10.3390/math10111847 - 27 May 2022
Cited by 2 | Viewed by 1403
Abstract
Multi-label classification deals with determining instance-label associations for unseen instances. Although many margin-based approaches have been carefully developed, the uncertain classification of instances with smaller separation margins remains unsolved. The intuitionistic fuzzy set is an effective tool for characterizing uncertainty, yet it has not been examined for multi-label cases. This paper proposes a novel model called intuitionistic fuzzy three-way label enhancement (IFTWLE) for multi-label classification. IFTWLE combines label enhancement with an intuitionistic fuzzy set under the framework of three-way decisions. For unseen instances, we generate pseudo-labels for label uncertainty evaluation from a logical-label-based model. An intuitionistic fuzzy set-based instance selection principle seamlessly bridges logical label learning and numerical label learning. The principle is developed hierarchically: at the label level, membership and non-membership functions are defined pair-wise to measure local uncertainty and generate candidate uncertain instances; after upgrading to the instance level, we select instances from the candidates for label enhancement, while the remaining instances are left unchanged. To the best of our knowledge, this is the first attempt to combine logical label learning and numerical label learning into a unified framework for minimizing classification uncertainty. Extensive experiments demonstrate that, with the selectively reconstructed label importance, IFTWLE is statistically superior to state-of-the-art multi-label classification algorithms in terms of classification accuracy. The computational complexity of the algorithm is O(n²mk), where n, m, and k denote the number of unseen instances, the number of labels, and the average label-specific feature size, respectively.
(This article belongs to the Special Issue Soft Computing and Uncertainty Learning with Applications)
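A small sketch of the three-way decision on a single instance-label pair from an intuitionistic fuzzy (membership, non-membership) pair; the thresholds and the mapping from model outputs are illustrative assumptions, not the paper's values:

```python
def three_way_label(membership: float, non_membership: float,
                    accept: float = 0.7, reject: float = 0.7) -> str:
    """Three-way decision on one instance-label pair from an intuitionistic
    fuzzy pair (membership mu, non-membership nu); hesitancy = 1 - mu - nu.
    Thresholds are illustrative, not those of the paper."""
    assert membership + non_membership <= 1.0
    hesitancy = 1.0 - membership - non_membership
    if membership >= accept:
        return "accept"            # confidently relevant label
    if non_membership >= reject:
        return "reject"            # confidently irrelevant label
    return "defer"                 # uncertain: candidate for label enhancement

print(three_way_label(0.8, 0.1))   # accept
print(three_way_label(0.4, 0.3))   # defer -> enhance this label
```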
