Artificial Intelligence and Scientific Computing: Mathematical Techniques and Applications

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "Computational and Applied Mathematics".

Deadline for manuscript submissions: 20 November 2024 | Viewed by 6947

Special Issue Editors

Dr. Jue Wang
Guest Editor
Computer Network Information Center, Chinese Academy of Sciences, Beijing 100083, China
Interests: artificial intelligence; high-performance computing

Dr. Jidong Zhai
Guest Editor
Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China
Interests: parallel computing; compilers; programming languages; performance evaluation

Dr. Weile Jia
Guest Editor
Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China
Interests: high-performance computing; artificial intelligence; first-principles computing

Special Issue Information

Dear Colleagues,

Artificial intelligence technology and scientific computing are revolutionizing virtually every application domain, including computational chemistry, industrial control, computational biology, quantum computing, and materials discovery.

Scientific computing reveals the laws of the real world through simulation, bridging theory and observed phenomena. Many phenomena can be simulated faithfully by solving mathematical equations on computers. However, many problems (e.g., decision-making optimization and constrained optimization problems) are beyond the capabilities of traditional numerical approaches and require more powerful and more efficient computational technology. Meanwhile, large-scale simulations generate massive amounts of scientific data. Unlike conventional data, scientific data carry a domain-knowledge background, and analyzing them is laborious and expensive due to the high cost of traditional data analysis methods.

Therefore, research combining scientific computing with artificial intelligence technology has become a growing trend for promoting scientific discovery.

The primary aim of this Special Issue is to bring together original research on innovative analytical, numerical, and AI-based methods in scientific computing. Submissions showcasing the latest developments in AI-driven data analysis, AI-driven control and scheduling, AI-driven scientific discovery, and AI-driven calculation are welcome.

Dr. Jue Wang
Dr. Jidong Zhai
Dr. Weile Jia
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • scientific computing
  • scientific data analysis
  • mathematical techniques
  • mathematical applications

Published Papers (5 papers)


Research

12 pages, 779 KiB  
Article
Traffic-Sign-Detection Algorithm Based on SK-EVC-YOLO
by Faguo Zhou, Huichang Zu, Yang Li, Yanan Song, Junbin Liao and Changshuo Zheng
Mathematics 2023, 11(18), 3873; https://doi.org/10.3390/math11183873 - 11 Sep 2023
Viewed by 894
Abstract
Traffic sign detection is an important research direction in intelligent transportation and plays a crucial role in ensuring traffic safety. This research proposes a traffic-sign-detection algorithm based on selective kernel attention (SK attention), an explicit visual center (EVC), and the YOLOv5 model to address the problems of small targets, incomplete detection, and insufficient detection accuracy in natural, complex road situations. First, the feature map with a smaller receptive field in the backbone network is fused with feature maps at other scales to add a small-target detection layer. Then, the SK attention mechanism is introduced to extract and weight features at different scales and levels, enhancing attention to the target. By fusing the explicit visual center to aggregate local features within each layer, the detection of small targets is improved. In experiments on the Tsinghua-Tencent Traffic Sign dataset (TT100K), the proposed algorithm reaches a mean average precision (mAP) of 88.5%, which is 4.6% higher than the original model, demonstrating its practicality for detecting small traffic signs.
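For readers unfamiliar with the building block, the sketch below illustrates selective-kernel-style attention in PyTorch, following the general SKNet design rather than the paper's exact SK-EVC-YOLO wiring: parallel convolution branches with different kernel sizes are fused, squeezed by global pooling, and recombined with per-branch softmax weights. The kernel sizes and reduction ratio r are illustrative assumptions.

```python
# Minimal sketch of selective-kernel (SK) attention; not the paper's exact module.
import torch
import torch.nn as nn

class SKAttention(nn.Module):
    """Fuse multi-kernel conv branches, then reweight them per channel."""
    def __init__(self, channels, kernels=(3, 5), r=16):  # kernels/r are illustrative
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(channels, channels, k, padding=k // 2, bias=False),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
            )
            for k in kernels
        ])
        d = max(channels // r, 4)
        self.squeeze = nn.Sequential(nn.Linear(channels, d), nn.ReLU(inplace=True))
        self.heads = nn.ModuleList([nn.Linear(d, channels) for _ in kernels])

    def forward(self, x):
        feats = torch.stack([b(x) for b in self.branches])  # (K, N, C, H, W)
        s = feats.sum(0).mean(dim=(2, 3))                   # fuse + global avg pool -> (N, C)
        z = self.squeeze(s)                                 # compact descriptor (N, d)
        att = torch.stack([h(z) for h in self.heads])       # (K, N, C)
        att = torch.softmax(att, dim=0)[..., None, None]    # softmax across branches
        return (feats * att).sum(0)                         # weighted branch sum

# Usage: y = SKAttention(128)(torch.randn(1, 128, 40, 40)) keeps the input shape.
```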

18 pages, 10506 KiB  
Article
Recognition of Plasma-Treated Rice Based on 3D Deep Residual Network with Attention Mechanism
by Xiaojiang Tang, Wenhao Zhao, Junwei Guo, Baoxia Li, Xin Liu, Yuan Wang and Feng Huang
Mathematics 2023, 11(7), 1686; https://doi.org/10.3390/math11071686 - 31 Mar 2023
Cited by 1 | Viewed by 938
Abstract
Low-temperature plasma is a new green agricultural technology that can improve the yield and quality of rice. Identifying harvested rice grown from plasma-treated seeds plays an important role in popularizing and applying low-temperature plasma in agriculture. This study collected hyperspectral data of harvested rice, including rice from plasma-treated seeds, and constructed a recognition model based on hyperspectral images (HSI) using a 3D ResNet (HSI-3DResNet), which extracts spatial-spectral features from HSI data cubes through 3D convolution. In addition, a spectral-channel 3D attention module (C3DAM) is proposed to extract key spectral features. Experiments showed that the proposed C3DAM improves the recognition accuracy of the model by 4.2%, while the size and parameter count of the model increase by only 4.1% and 3.8%, respectively. The proposed HSI-3DResNet outperforms other methods, with an overall accuracy of 97.47%. The algorithm was also verified on a public dataset.
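As background for the attention mechanism, the sketch below shows squeeze-and-excitation-style channel attention on 3D spectral-spatial feature maps in PyTorch: per-channel weights are learned from globally pooled statistics and used to reweight the spectral channels at small parameter cost. The paper defines the actual C3DAM; the module name and reduction ratio r here are illustrative assumptions.

```python
# Minimal sketch of channel attention for 3D feature maps; not the paper's C3DAM.
import torch
import torch.nn as nn

class ChannelAttention3D(nn.Module):
    def __init__(self, channels, r=8):  # reduction ratio r is an illustrative choice
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)        # squeeze: (N, C, D, H, W) -> (N, C, 1, 1, 1)
        self.fc = nn.Sequential(
            nn.Conv3d(channels, channels // r, 1), # excitation bottleneck
            nn.ReLU(inplace=True),
            nn.Conv3d(channels // r, channels, 1),
            nn.Sigmoid(),                          # per-channel weights in (0, 1)
        )

    def forward(self, x):
        return x * self.fc(self.pool(x))           # reweight spectral channels

# Usage: wrap a 3D residual block's output.
# y = ChannelAttention3D(64)(torch.randn(2, 64, 16, 32, 32))
```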

10 pages, 659 KiB  
Article
A Real Neural Network State for Quantum Chemistry
by Yangjun Wu, Xiansong Xu, Dario Poletti, Yi Fan, Chu Guo and Honghui Shang
Mathematics 2023, 11(6), 1417; https://doi.org/10.3390/math11061417 - 15 Mar 2023
Cited by 1 | Viewed by 1271
Abstract
The restricted Boltzmann machine (RBM) has recently been demonstrated as a useful tool for solving quantum many-body problems. In this work, we propose tanh-FCN, a single-layer fully connected neural network adapted from the RBM, to study ab initio quantum chemistry problems. Our contribution is two-fold: (1) our neural network uses only real numbers to represent the real electronic wave function, while obtaining precision comparable to the RBM for various prototypical molecules; (2) we show that knowledge of the Hartree-Fock reference state can be used to systematically accelerate the convergence of the variational Monte Carlo algorithm and to increase the precision of the final energy.
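For context, the standard RBM wave-function ansatz (in the Carleo-Troyer form) that tanh-FCN adapts represents the amplitude of a configuration σ as follows, where a, b, and W are variational parameters; the precise tanh-FCN parameterization is given in the paper.

```latex
% Standard RBM ansatz for the wave-function amplitude of a configuration \sigma;
% tanh-FCN adapts this form with purely real parameters.
\Psi_{\mathrm{RBM}}(\sigma)
  = \exp\Big(\sum_i a_i \sigma_i\Big)
    \prod_{j=1}^{M} 2\cosh\Big(b_j + \sum_i W_{ij}\,\sigma_i\Big)
```

Restricting a_i, b_j, and W_ij to real values is what lets the network represent the real electronic wave function directly, as the abstract describes.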

21 pages, 945 KiB  
Article
Adaptive Distributed Parallel Training Method for a Deep Learning Model Based on Dynamic Critical Paths of DAG
by Yan Zeng, Wei Wang, Yong Ding, Jilin Zhang, Yongjian Ren and Guangzheng Yi
Mathematics 2022, 10(24), 4788; https://doi.org/10.3390/math10244788 - 16 Dec 2022
Cited by 1 | Viewed by 1542
Abstract
AI provides a new method for massive simulated-data calculations in molecular dynamics, materials science, and other scientific computing fields. However, the complex structures and large-scale parameters of neural network models make them difficult to develop and train. Automatic parallelization based on graph algorithms is one of the most promising ways to address this problem, but the design, implementation, and execution of distributed parallel policies for large-scale neural network models remain inefficient. In this paper, we propose FD-DPS, an adaptive distributed parallel training method based on dynamically generated critical paths of a DAG (directed acyclic graph), to solve this efficiency problem. First, the method splits operators along tensor dimensions, which expands the space available for model parallelism. Second, a dynamic critical-path generation method captures node-priority changes in the DAG of the neural network model. Finally, the method schedules the critical paths optimally based on node priority, thereby improving the performance of the parallel strategies. Our experiments show that FD-DPS achieves 12.76% and 11.78% faster training on the PnasNet_mobile and ResNet_200 models, respectively, compared with the MP-DPS and Fast methods.
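To make the critical-path idea concrete, here is a minimal sketch (not the paper's implementation) that finds the highest-cost path through an operator DAG in topological order; FD-DPS regenerates such paths dynamically as node priorities change and schedules the operators on them first. The `cost` table of per-operator time estimates is a hypothetical input.

```python
# Minimal sketch: longest-cost (critical) path in an operator DAG.
from collections import defaultdict

def critical_path(nodes, edges, cost):
    """nodes: list of op ids; edges: list of (u, v) pairs; cost: {op: seconds}."""
    succ, indeg = defaultdict(list), defaultdict(int)
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1
    order = [n for n in nodes if indeg[n] == 0]
    for n in order:                      # Kahn's algorithm builds a topological order
        for v in succ[n]:
            indeg[v] -= 1
            if indeg[v] == 0:
                order.append(v)
    dist = {n: cost[n] for n in nodes}   # best path cost ending at each node
    parent = {}
    for u in order:                      # relax edges along the topological order
        for v in succ[u]:
            if dist[u] + cost[v] > dist[v]:
                dist[v], parent[v] = dist[u] + cost[v], u
    end = max(dist, key=dist.get)        # node where the longest path finishes
    path = [end]
    while path[-1] in parent:
        path.append(parent[path[-1]])
    return path[::-1], dist[end]         # ops on the critical path, total cost

# Example: ops on the returned path would be prioritized for device placement.
# path, t = critical_path(["a", "b", "c"], [("a", "b"), ("a", "c")],
#                         {"a": 2, "b": 5, "c": 1})   # -> (["a", "b"], 7)
```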

26 pages, 2683 KiB  
Article
Elastic Information Bottleneck
by Yuyan Ni, Yanyan Lan, Ao Liu and Zhiming Ma
Mathematics 2022, 10(18), 3352; https://doi.org/10.3390/math10183352 - 15 Sep 2022
Cited by 1 | Viewed by 1360
Abstract
The information bottleneck is an information-theoretic principle of representation learning that aims to learn a maximally compressed representation that preserves as much information about the labels as possible. Under this principle, two different methods have been proposed, the information bottleneck (IB) and the deterministic information bottleneck (DIB), and they have made significant progress in explaining the representation mechanisms of deep learning algorithms. However, these theoretical and empirical successes are valid only under the assumption that training and test data are drawn from the same distribution, which is clearly not satisfied in many real-world applications. In this paper, we study their generalization abilities in a transfer learning scenario, where the target error can be decomposed into three components: the source empirical error, the source generalization gap (SG), and the representation discrepancy (RD). Comparing IB and DIB on these terms, we prove that DIB's SG bound is tighter than IB's, while DIB's RD is larger than IB's, so neither dominates the other. To balance the trade-off between SG and RD, we propose an elastic information bottleneck (EIB) that interpolates between the IB and DIB regularizers, which guarantees a Pareto frontier within the IB framework. Additionally, simulations and real-data experiments show that EIB achieves better domain adaptation results than IB and DIB, which validates our theory.
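In the usual notation, the IB and DIB objectives, together with one natural interpolation between their regularizers, read as follows. This is a hedged sketch consistent with the abstract; the exact EIB parameterization is given in the paper.

```latex
% IB compresses via mutual information; DIB via the representation entropy.
\mathcal{L}_{\mathrm{IB}}  = I(X;Z) - \beta\, I(Z;Y), \qquad
\mathcal{L}_{\mathrm{DIB}} = H(Z) - \beta\, I(Z;Y)
% Using I(X;Z) = H(Z) - H(Z \mid X), an \alpha-interpolation of the two regularizers:
\mathcal{L}_{\mathrm{EIB}} = H(Z) - \alpha\, H(Z \mid X) - \beta\, I(Z;Y),
  \qquad \alpha \in [0,1]
% \alpha = 0 recovers DIB; \alpha = 1 recovers IB.
```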
