Deep Neural Networks and Optimization Algorithms

A special issue of Algorithms (ISSN 1999-4893). This special issue belongs to the section "Combinatorial Optimization, Graph, and Network Algorithms".

Deadline for manuscript submissions: closed (15 June 2023) | Viewed by 19616

Special Issue Editors


Guest Editor: Dr. Jia-Bao Liu
School of Mathematics and Physics, Anhui Jianzhu University, Hefei 230601, China
Interests: graph theory; combinatorial chemistry; network topology; modeling; statistical analysis

Guest Editor: Dr. M. Faisal Nadeem
Department of Mathematics, COMSATS University Islamabad, Lahore Campus, Lahore 54000, Pakistan
Interests: networks; optimization; graph theory; fault tolerance

Guest Editor: Dr. Yilun Shang
Department of Computer and Information Sciences, Northumbria University, Newcastle-upon-Tyne, UK
Interests: mathematics; complex systems; networks; computer science; physics

Special Issue Information

Dear Colleagues,

Deep neural networks and optimization algorithms have recently proved useful for modelling a variety of complex systems and for analyzing their network properties. They are widely used in practice, for example in image processing, speech recognition, and network science. Researchers from many fields worldwide have carried out fruitful work on deep neural networks and optimization algorithms, and, as this research deepens, new problems and challenges continue to emerge. The aim of this Special Issue is to present the latest theoretical methods and practical algorithms for deep neural networks and their applications. We hope that this issue provides useful information and technical references for readers interested in the field and thereby promotes progress in deep neural networks.

Dr. Jia-Bao Liu
Dr. M. Faisal Nadeem
Dr. Yilun Shang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for the submission of manuscripts are available on the Instructions for Authors page. Algorithms is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • deep neural network algorithms
  • deep neural network design
  • graph models and graph algorithm complexity
  • deep neural network optimization
  • deep neural network architecture
  • evolutionary networks and algorithms
  • deep neural network applications
  • topological indices of complex networks

Published Papers (8 papers)


Editorial


2 pages, 163 KiB  
Editorial
Overview of the Special Issue on “Deep Neural Networks and Optimization Algorithms”
by Jia-Bao Liu, Muhammad Faisal Nadeem and Yilun Shang
Algorithms 2023, 16(11), 497; https://doi.org/10.3390/a16110497 - 26 Oct 2023
Viewed by 981
Abstract
Deep Neural Networks and Optimization Algorithms have many applications in engineering problems and scientific research [...]

Research


25 pages, 16908 KiB  
Article
Design Optimization of Truss Structures Using a Graph Neural Network-Based Surrogate Model
by Navid Nourian, Mamdouh El-Badry and Maziar Jamshidi
Algorithms 2023, 16(8), 380; https://doi.org/10.3390/a16080380 - 07 Aug 2023
Cited by 1 | Viewed by 2500
Abstract
One of the primary objectives of truss structure design optimization is to minimize the total weight by determining the optimal sizes of the truss members while ensuring structural stability and integrity against external loads. Trusses consist of pin joints connected by straight members, analogous to vertices and edges in a mathematical graph. This characteristic motivates the idea of representing truss joints and members as graph vertices and edges. In this study, a Graph Neural Network (GNN) is employed to exploit the benefits of graph representation and develop a GNN-based surrogate model integrated with a Particle Swarm Optimization (PSO) algorithm to approximate nodal displacements of trusses during the design optimization process. This approach enables the determination of the optimal cross-sectional areas of the truss members with fewer finite element model (FEM) analyses. The validity and effectiveness of the GNN-based optimization technique are assessed by comparing its results with those of a conventional FEM-based design optimization of three truss structures: a 10-bar planar truss, a 72-bar space truss, and a 200-bar planar truss. The results demonstrate the superiority of the GNN-based optimization, which can achieve the optimal solutions without violating constraints and at a faster rate, particularly for complex truss structures like the 200-bar planar truss problem.
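The combination described above can be illustrated with a minimal sketch: a message-passing network predicts nodal displacements from joint features over the truss connectivity, and a standard PSO update searches over member cross-sectional areas. The class and function names, layer widths, and velocity-update constants below are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: a GNN surrogate that maps joint features (including member
# cross-sectional areas gathered at the joints) to approximate nodal displacements,
# queried inside a PSO loop instead of repeated FEM analyses.
import torch
import torch.nn as nn

class TrussGNN(nn.Module):
    """Simple message-passing surrogate; node features are aggregated over the
    truss connectivity (adjacency matrix) for a fixed number of rounds."""
    def __init__(self, node_dim, hidden=64, rounds=3):
        super().__init__()
        self.encode = nn.Linear(node_dim, hidden)
        self.message = nn.Linear(hidden, hidden)
        self.readout = nn.Linear(hidden, 2)   # (u_x, u_y) displacement per joint
        self.rounds = rounds

    def forward(self, x, adj):
        # x: (n_joints, node_dim), adj: (n_joints, n_joints) adjacency of the truss
        h = torch.relu(self.encode(x))
        for _ in range(self.rounds):
            h = torch.relu(self.message(adj @ h)) + h   # aggregate neighbours, residual
        return self.readout(h)

def pso_step(positions, velocities, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One standard PSO velocity/position update over candidate cross-sectional areas."""
    r1, r2 = torch.rand_like(positions), torch.rand_like(positions)
    velocities = w * velocities + c1 * r1 * (pbest - positions) + c2 * r2 * (gbest - positions)
    return positions + velocities, velocities
```

In such a setup, the surrogate would replace most FEM calls inside the PSO loop, with only the most promising candidate designs re-checked by the full finite element model.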

16 pages, 1971 KiB  
Article
Leaking Gas Source Tracking for Multiple Chemical Parks within An Urban City
by Junwei Lang, Zhenjia Zeng, Tengfei Ma and Sailing He
Algorithms 2023, 16(7), 342; https://doi.org/10.3390/a16070342 - 17 Jul 2023
Cited by 1 | Viewed by 1055
Abstract
Sudden air pollution accidents (explosions, fires, leaks, etc.) in chemical industry parks may result in great harm to people’s lives, property, and the ecological environment. A gas tracking network can monitor hazardous gas diffusion using traceability technology combined with sensors distributed within the scope of a chemical industry park. Such systems can automatically locate the source of pollutants in a timely manner and notify the relevant departments so that major hazards can be brought under control. However, tracing the source of a leak over a large area is still a tough problem, especially within an urban area. In this paper, the diffusion of 79 potential leaking sources with consideration of different weather conditions and complex urban terrain is simulated by AERMOD. Only 61 sensors are used to monitor the gas concentration over this large area. A fully connected network trained with a hybrid strategy is proposed to trace the leaking source effectively and robustly. Our proposed model reaches a final classification accuracy of 99.14%.
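As a rough illustration of the classification setup described above (61 sensor readings mapped to one of 79 candidate sources), a plain fully connected classifier could look like the sketch below; the layer widths, optimizer, and loss are assumptions, and the paper's hybrid training strategy is not reproduced.

```python
# Illustrative sketch only: a fully connected network that classifies which of the
# 79 potential leaking sources produced the concentrations seen by the 61 sensors.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(61, 256), nn.ReLU(),
    nn.Linear(256, 128), nn.ReLU(),
    nn.Linear(128, 79),          # logits over the 79 potential sources
)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(readings, source_ids):
    """readings: (batch, 61) sensor concentrations; source_ids: (batch,) class labels."""
    optimizer.zero_grad()
    loss = criterion(model(readings), source_ids)
    loss.backward()
    optimizer.step()
    return loss.item()
```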

22 pages, 2092 KiB  
Article
Implementing Deep Convolutional Neural Networks for QR Code-Based Printed Source Identification
by Min-Jen Tsai, Ya-Chu Lee and Te-Ming Chen
Algorithms 2023, 16(3), 160; https://doi.org/10.3390/a16030160 - 14 Mar 2023
Viewed by 3665
Abstract
QR codes (short for Quick Response codes) were originally developed for use in the automotive industry to track factory inventories and logistics, but their popularity has expanded significantly in the past few years due to the widespread use of smartphones and mobile phone cameras. QR codes can be used for a variety of purposes, including tracking inventory, advertising, electronic ticketing, and mobile payments. Although they are convenient and widely used to store and share information, their accessibility also means they might be forged easily. Digital forensics can be used to recognize direct links of printed documents, including QR codes, which is important for the investigation of forged documents and the prosecution of forgers. The process involves using optical mechanisms to identify the relationship between source printers and the duplicates. Techniques from computer vision and machine learning, such as convolutional neural networks (CNNs), can be implemented to study and summarize statistical features in order to improve identification accuracy. This study implemented AlexNet, DenseNet201, GoogleNet, MobileNetv2, ResNet, VGG16, and other pretrained CNN models to evaluate their ability to predict the source printer of QR codes with a high level of accuracy. Among them, the customized CNN model demonstrated better results in identifying the printed sources of grayscale and color QR codes with less computational power and training time.
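For illustration only, a small convolutional classifier for printer-source identification of grayscale QR-code patches might be structured as follows; the 64x64 input size, channel counts, and number of printers are assumptions, and this is not the authors' customized network.

```python
# A minimal sketch of a CNN that classifies a grayscale QR-code patch by its source printer.
import torch
import torch.nn as nn

class QRSourceCNN(nn.Module):
    def __init__(self, num_printers=8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, num_printers)

    def forward(self, x):
        # x: (batch, 1, 64, 64) grayscale QR-code patches -> logits over printers
        return self.classifier(self.features(x).flatten(1))
```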

12 pages, 1947 KiB  
Article
EEG Data Augmentation for Emotion Recognition with a Task-Driven GAN
by Qing Liu, Jianjun Hao and Yijun Guo
Algorithms 2023, 16(2), 118; https://doi.org/10.3390/a16020118 - 15 Feb 2023
Cited by 2 | Viewed by 1960
Abstract
The high cost of acquiring training data for electroencephalogram (EEG)-based emotion recognition makes it difficult to establish a high-precision model from EEG signals for emotion recognition tasks. Given the outstanding performance of generative adversarial networks (GANs) in data augmentation in recent years, this paper proposes a task-driven method based on a CWGAN to generate high-quality artificial data. The generated data are represented as multi-channel EEG differential entropy feature maps, and a task network (emotion classifier) is introduced to guide the generator during adversarial training. The evaluation results show that the proposed method can generate artificial data with clearer classifications and distributions that are more similar to the real data, resulting in obvious improvements in EEG-based emotion recognition tasks.
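The task-driven element can be sketched as a generator objective that combines the usual Wasserstein adversarial term with the cross-entropy of an auxiliary emotion classifier on the generated differential entropy feature maps. The function below is a hedged outline under assumed network interfaces (generator, critic, task_net) and an assumed weighting lambda_task, not the paper's exact CWGAN configuration.

```python
# Hedged sketch: the generator is trained against both a Wasserstein critic and an
# auxiliary emotion classifier, so the generated feature maps must also be classified
# correctly into the conditioned emotion class.
import torch
import torch.nn as nn

def generator_loss(generator, critic, task_net, z, labels, lambda_task=1.0):
    """z: (batch, latent_dim) noise; labels: (batch,) target emotion classes."""
    fake = generator(z, labels)                 # conditional generation of DE feature maps
    adv = -critic(fake, labels).mean()          # WGAN generator term: maximize critic score
    task = nn.functional.cross_entropy(task_net(fake), labels)  # guidance from the classifier
    return adv + lambda_task * task
```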

28 pages, 953 KiB  
Article
Enhancing Logistic Regression Using Neural Networks for Classification in Actuarial Learning
by George Tzougas and Konstantin Kutzkov
Algorithms 2023, 16(2), 99; https://doi.org/10.3390/a16020099 - 09 Feb 2023
Cited by 4 | Viewed by 2856
Abstract
We developed a methodology for the neural network boosting of logistic regression aimed at learning an additional model structure from the data. In particular, we constructed two classes of neural network-based models: shallow–dense neural networks with one hidden layer and deep neural networks with multiple hidden layers. Furthermore, several advanced approaches were explored, including the combined actuarial neural network approach, embeddings and transfer learning. The model training was achieved by minimizing either the deviance or the cross-entropy loss functions, leading to fourteen neural network-based models in total. For illustrative purposes, logistic regression and the alternative neural network-based models we propose are employed for a binary classification exercise concerning the occurrence of at least one claim in a French motor third-party insurance portfolio. Finally, the model interpretability issue was addressed via the local interpretable model-agnostic explanations approach.
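One shallow variant of neural network boosting of logistic regression can be sketched as a linear (GLM) score kept as a skip connection plus a small network that learns an additive correction on the logit scale, trained with binary cross-entropy (equivalent to minimizing the deviance). The layer sizes below are illustrative assumptions, not the paper's fourteen-model setup.

```python
# A minimal sketch of boosting a logistic regression with a shallow dense network.
import torch
import torch.nn as nn

class BoostedLogistic(nn.Module):
    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.glm = nn.Linear(n_features, 1)          # logistic-regression part
        self.net = nn.Sequential(                    # shallow dense correction term
            nn.Linear(n_features, hidden), nn.Tanh(), nn.Linear(hidden, 1)
        )

    def forward(self, x):
        # combine the two scores on the logit scale, then map to a claim probability
        return torch.sigmoid(self.glm(x) + self.net(x))

loss_fn = nn.BCELoss()   # binary cross-entropy on the predicted probabilities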

20 pages, 339 KiB  
Article
Online Batch Selection for Enhanced Generalization in Imbalanced Datasets
by George Ioannou, Georgios Alexandridis and Andreas Stafylopatis
Algorithms 2023, 16(2), 65; https://doi.org/10.3390/a16020065 - 18 Jan 2023
Cited by 2 | Viewed by 1691
Abstract
Importance sampling, a variant of online sampling, is often used in neural network training to improve the learning process and, in particular, the convergence speed of the model. We study here the performance of a set of batch selection algorithms, namely, online sampling algorithms that process small parts of the dataset at each iteration. Convergence is accelerated by creating a bias towards the learning of hard samples. We first consider the baseline algorithm and investigate its performance in terms of convergence speed and generalization efficiency. The latter, however, is limited in the case of poorly balanced datasets. To alleviate this shortcoming, we propose two variations of the algorithm that achieve better generalization without undermining the convergence speed boost offered by the original algorithm. Various data transformation techniques were tested in conjunction with the proposed scheme to develop an overall training method for the model and to ensure robustness in different training environments. An experimental framework was constructed using three naturally imbalanced datasets and one artificially imbalanced one. The results confirm the convergence advantage of the extended algorithm over the vanilla one and, most importantly, show better generalization performance in imbalanced data environments.
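A minimal sketch of the underlying idea of biasing batch selection toward hard examples: score a candidate pool by per-sample loss and draw the training batch with probability proportional to those losses. The pool size, batch size, and proportional sampling scheme are assumptions, not the specific variants proposed in the paper.

```python
# Illustrative sketch only: loss-proportional batch selection from a candidate pool.
import torch
import torch.nn.functional as F

def select_hard_batch(model, pool_x, pool_y, batch_size=32):
    """pool_x: (pool, features), pool_y: (pool,) labels from an imbalanced dataset."""
    with torch.no_grad():
        losses = F.cross_entropy(model(pool_x), pool_y, reduction="none")
    probs = losses / losses.sum()                       # harder samples -> higher probability
    idx = torch.multinomial(probs, batch_size, replacement=False)
    return pool_x[idx], pool_y[idx]
```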

16 pages, 378 KiB  
Article
An Improved Bi-LSTM-Based Missing Value Imputation Approach for Pregnancy Examination Data
by Xinxi Lu, Lijuan Yuan, Ruifeng Li, Zhihuan Xing, Ning Yao and Yichun Yu
Algorithms 2023, 16(1), 12; https://doi.org/10.3390/a16010012 - 24 Dec 2022
Cited by 1 | Viewed by 2350
Abstract
In recent years, the development of computer technology has promoted the informatization and intelligentization of hospital management systems and has thus produced a large amount of medical data. These medical data are valuable resources for research: from them, inducers and previously unknown symptoms can be obtained that help discover diseases and make earlier diagnoses. Hypertensive disorder in pregnancy (HDP) is a common obstetric complication in pregnant women, which has severe adverse effects on the life safety of pregnant women and fetuses. However, the early and mid-term symptoms of HDP are not obvious, and there is no effective solution for it other than terminating the pregnancy. Therefore, detecting and preventing HDP is of great importance. This study focuses on the preprocessing of pregnancy examination data, which serves as a part of HDP prediction. We found that the problem of missing data has a large impact on HDP prediction. Unlike general data, pregnancy examination data have high dimension and a high missing rate, are in a time series, and often have many non-linear relations. Current methods are not able to process the data effectively. To this end, we propose an improved bi-LSTM-based missing value imputation approach. It combines traditional machine learning and a bidirectional LSTM to deal with missing data in pregnancy examination data. Our missing value imputation method obtains a good effect and improves the accuracy of the later prediction of HDP using examination data.
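A hedged sketch of the bidirectional-LSTM component: the sequence of examination records is encoded in both directions and a linear head estimates feature values, which are used only at time steps where the original values are missing. The feature dimension, hidden size, and masking convention are assumptions; the combination with traditional machine learning described in the paper is not shown.

```python
# Minimal sketch: a bi-LSTM that fills missing entries in a time series of examinations.
import torch
import torch.nn as nn

class BiLSTMImputer(nn.Module):
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_features)   # forward + backward hidden states

    def forward(self, x, mask):
        """x: (batch, time, features) with zeros at missing entries;
        mask: same shape, 1 where observed, 0 where missing."""
        out, _ = self.lstm(x)
        estimates = self.head(out)
        return x * mask + estimates * (1 - mask)         # keep observed, fill missing
```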
