Mathematics of Neural Networks: Models, Algorithms and Applications

A special issue of Axioms (ISSN 2075-1680). This special issue belongs to the section "Mathematical Analysis".

Deadline for manuscript submissions: closed (20 March 2024) | Viewed by 3334

Special Issue Editors


Guest Editor
Department of Aeronautical Engineering, Chaoyang University of Technology, Taichung 413310, Taiwan
Interests: neuro-fuzzy systems; fuzzy systems; fuzzy theory; cloud manufacturing and services

Special Issue Information

Dear Colleagues,

This Special Issue of Axioms explores the mathematics, algorithms, and computational theory underlying neural networks and their applications. It is dedicated to original research and recent developments in mathematical methods and computer science. Contributions on the mathematics of neural network models, as well as on algorithms and their applications, are welcome, in areas including but not limited to physical, engineering, medical, and social systems.

Dr. Yu-Cheng Wang
Prof. Dr. Tin-Chih Toly Chen
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Axioms is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • neuro-fuzzy systems
  • mathematical
  • fuzzy systems
  • neural networks
  • algorithms and applications

Published Papers (3 papers)


Research

25 pages, 419 KiB  
Article
Distance Metric Optimization-Driven Neural Network Learning Framework for Pattern Classification
by Yimeng Jiang, Guolin Yu and Jun Ma
Axioms 2023, 12(8), 765; https://doi.org/10.3390/axioms12080765 - 03 Aug 2023
Viewed by 717
Abstract
As a novel neural network learning framework, the Twin Extreme Learning Machine (TELM) has received extensive attention in the field of machine learning. However, TELM is affected by noise and outliers in practical applications, so its generalization performance is reduced compared with robust learning algorithms. In this paper, we propose two novel distance metric optimization-driven robust twin extreme learning machine frameworks for pattern classification, namely CWTELM and FCWTELM. By introducing the robust Welsch loss function and the capped L2,p-distance metric, our methods reduce the effect of outliers and improve the generalization performance of the model compared with TELM. In addition, two efficient iterative algorithms are designed to solve the non-convex optimization problems underlying CWTELM and FCWTELM, and we theoretically guarantee their convergence, local optimality, and computational complexity. The proposed algorithms are then compared with five classical algorithms on different datasets under different noise settings, and a statistical analysis is carried out. We conclude that our algorithms have excellent robustness and classification performance.
(This article belongs to the Special Issue Mathematics of Neural Networks: Models, Algorithms and Applications)
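As background, the two robust ingredients named in the abstract have standard forms in the robust-learning literature; a sketch (the authors' exact parameterization may differ) is

L_\sigma(r) = \frac{\sigma^{2}}{2}\left[1 - \exp\!\left(-\frac{r^{2}}{\sigma^{2}}\right)\right] (Welsch loss),
d_{\varepsilon,p}(u, v) = \min\!\left(\lVert u - v \rVert_{2}^{\,p},\ \varepsilon\right), \quad 0 < p \le 2,\ \varepsilon > 0 (capped L2,p-distance).

Both quantities are bounded (by \sigma^{2}/2 and \varepsilon, respectively), which is what limits the influence of large residuals caused by noise or outliers.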

27 pages, 478 KiB  
Article
A Novel Robust Metric Distance Optimization-Driven Manifold Learning Framework for Semi-Supervised Pattern Classification
by Bao Ma, Jun Ma and Guolin Yu
Axioms 2023, 12(8), 737; https://doi.org/10.3390/axioms12080737 - 27 Jul 2023
Viewed by 735
Abstract
In this work, we address the problem of improving the classification performance of machine learning models, especially in the presence of noisy and outlier data. To this end, we first design a generalized adaptive robust loss function called Vθ(x). Intuitively, Vθ(x) improves the robustness of the model by selecting different robust loss functions for different learning tasks during the learning process via the adaptive parameter θ. Compared with other robust loss functions, Vθ(x) has several desirable properties, such as symmetry, boundedness, robustness, non-convexity, and adaptivity, making it suitable for a wide range of machine learning applications. Secondly, a new robust semi-supervised learning framework for pattern classification is proposed. In this framework, the proposed robust loss function Vθ(x) and the capped L2,p-norm robust distance metric are introduced to improve the robustness and generalization performance of the model, especially when the outliers are far from the normal data distribution. Based on this framework, the Welsch manifold robust twin bounded support vector machine (WMRTBSVM) and its least-squares version are developed. Finally, two effective iterative optimization algorithms are designed, their convergence is proved, and their complexity is calculated. Experimental results on several datasets with different noise settings and evaluation criteria show that our methods have better classification performance and robustness. On the Cancer dataset without noise, the classification accuracies of the proposed methods are 94.17% and 95.62%, respectively; with 50% Gaussian noise, they are 91.76% and 90.59%, respectively, demonstrating satisfactory classification performance and robustness.
(This article belongs to the Special Issue Mathematics of Neural Networks: Models, Algorithms and Applications)
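The exact forms of Vθ(x) and of the WMRTBSVM objective are given in the paper itself. For orientation only, a generic manifold-regularized semi-supervised objective of the kind such frameworks build on (in the style of Belkin, Niyogi and Sindhwani, not the authors' formulation) is

\min_{f} \sum_{i=1}^{l} \ell\big(y_i, f(x_i)\big) + \gamma_A \lVert f \rVert_{K}^{2} + \gamma_I\, \mathbf{f}^{\top} L\, \mathbf{f},

where the sum runs over the l labeled samples, \lVert f \rVert_{K} is a regularization norm, and L is the graph Laplacian built from both labeled and unlabeled data. The abstract indicates that Vθ(x) and the capped L2,p-norm metric play the role of the loss and distance terms in the authors' framework to obtain robustness.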

13 pages, 450 KiB  
Article
Information Processing with Stability Point Modeling in Cohen–Grossberg Neural Networks
by Ekaterina Gospodinova and Ivan Torlakov
Axioms 2023, 12(7), 612; https://doi.org/10.3390/axioms12070612 - 21 Jun 2023
Viewed by 713
Abstract
The aim of this article is to develop efficient methods of expressing multilevel structured information from various modalities (images, speech, and text) in order to naturally replicate the structure as it occurs in the human brain. To achieve this goal, a number of theoretical and practical issues must be resolved, including the creation of a mathematical model with a stability point, an algorithm and software implementation for processing offline information, the representation of neural networks, and the long-term synchronization of the various modalities. An artificial neural network (ANN) of the Cohen–Grossberg type was used to accomplish these objectives. The research techniques reported here are based on the theory of pattern recognition as well as speech, text, and image processing algorithms.
(This article belongs to the Special Issue Mathematics of Neural Networks: Models, Algorithms and Applications)
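For context, a Cohen–Grossberg network is usually written as the system of ordinary differential equations

\dot{x}_i(t) = a_i\big(x_i(t)\big)\Big[b_i\big(x_i(t)\big) - \sum_{j=1}^{n} c_{ij}\, f_j\big(x_j(t)\big)\Big], \qquad i = 1, \dots, n,

where a_i > 0 is an amplification function, b_i a self-signal term, c_{ij} the connection weights, and f_j the activation functions; a stability point in the sense used above is an equilibrium x* of this system to which the state converges. The specific functions, inputs, and stability conditions employed in the article may differ from this generic form.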
