Deep Learning and Neuromorphic Chip

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (31 January 2024) | Viewed by 3181

Special Issue Editors


Guest Editor
College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou 310058, China
Interests: neuromorphic chip; deep learning accelerator; non-volatile circuits; biomedical & biometrics; computer vision

Guest Editor
School of Micro-Nano Electronics, Zhejiang University, Hangzhou 311200, China
Interests: neuromorphic computing; CMOS technology; emerging memristors

Special Issue Information

Dear Colleagues,

With the exponential growth of data generated every day, computing gains from device scaling alone are no longer sufficient; new materials, devices, algorithms, and architectures must be developed collaboratively to meet present and future computing needs. Neuromorphic chips can efficiently recognize, store, and process the massive volumes of data produced by technologies such as big data, artificial intelligence, and deep learning. In recent years, Apple, Microsoft, Qualcomm, IBM, and other global IT companies have entered the field of neuromorphic chips and invested heavily in research and development. Neuromorphic chips are expected to be applied across a wide range of IT applications, such as face recognition, speech recognition, robotics, unmanned aerial vehicles, autonomous vehicles, wearable devices, and data mining, and are regarded as a core next-generation technology driving the fourth industrial revolution.

This Special Issue of Applied Sciences will explore academic and industrial research on all topics related to neuromorphic chips, from materials, devices, circuits, algorithms, software, and hardware to application design. Topics of interest include, but are not limited to:

  • Device, circuit, and architecture design;
  • Emerging materials for neuromorphic devices;
  • Artificial Intelligence and machine learning;
  • Deep learning algorithms and optimizations;
  • Emerging technologies for brain-inspired computing and communications;
  • Brain-machine interfaces;
  • Mapping algorithms;
  • On-chip learning and inference;
  • Application, computing models, and hardware architecture.

Dr. Kejie Huang
Dr. Yishu Zhang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, you can go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence algorithm
  • hardware for artificial intelligence
  • neuromorphic system
  • advanced memory device
  • architecture design

Published Papers (2 papers)


Editorial


3 pages, 164 KiB  
Editorial
Foreword to the Special Issue on Deep Learning and Neuromorphic Chips
by Xuemeng Fan and Yishu Zhang
Appl. Sci. 2022, 12(21), 11189; https://doi.org/10.3390/app122111189 - 04 Nov 2022
Viewed by 1102
Abstract
With the advent of the Internet of Things and the era of big data, the ability of machine data processing to reach the level of human brain cognition and learning is an important goal in the field of Internet information technology, including cloud computing, data mining, machine learning, and artificial intelligence (AI) [...]
(This article belongs to the Special Issue Deep Learning and Neuromorphic Chip)

Research


14 pages, 2359 KiB  
Article
Research on a Service Load Prediction Method Based on VMD-GLRT
by Jin Zhang, Yiqi Huang, Yu Pi, Cheng Sun, Wangyang Cai and Yuanyuan Huang
Appl. Sci. 2023, 13(5), 3315; https://doi.org/10.3390/app13053315 - 05 Mar 2023
Cited by 1 | Viewed by 1409
Abstract
In this paper, a deep learning-based prediction model, VMD-GLRT, is proposed to address the accuracy problem of service load prediction. The VMD-GLRT model combines Variational Mode Decomposition (VMD) with GRU-LSTM and incorporates residual networks and self-attention mechanisms to improve the accuracy of the model. The VMD part decomposes the original time series into several intrinsic mode functions (IMFs) and a residual part; a GRU-LSTM structure with ResNets and self-attention then learns the features of the IMFs and the residual part. The model-building process focuses on three main aspects. Firstly, a mathematical model is constructed based on the data characteristics of the service workload, and VMD is used to decompose the input time series into multiple components to improve the efficiency of feature extraction. Secondly, a long short-term memory (LSTM) network unit is incorporated into the residual network, allowing the network to correct its predictions more accurately and improving the performance of the model. Finally, a self-attention mechanism is incorporated into the model, allowing it to better capture features over long distances and strengthening the dependence of the output vector on these features. To validate the performance of the model, experiments were conducted using open-source datasets. The experimental results were compared with those of other deep learning and statistical models, and the model proposed in this paper achieved improvements in mean absolute percentage error (MAPE).
(This article belongs to the Special Issue Deep Learning and Neuromorphic Chip)
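
The abstract above outlines the general shape of the model: VMD splits the load series into components, and each component is handled by a GRU-LSTM branch with a residual connection and self-attention before the per-component forecasts are combined. The following is only a minimal PyTorch sketch of that general pattern; the class names, layer sizes, and the summation of per-component forecasts are illustrative assumptions and do not reproduce the authors' implementation.

```python
# Minimal sketch (assumptions, not the paper's code): each VMD component is
# modeled by a GRU -> LSTM branch with a residual connection and self-attention,
# and the per-component forecasts are summed into one load prediction.
import torch
import torch.nn as nn

class GRULSTMAttentionBranch(nn.Module):
    """Models a single decomposed component (an IMF or the residue)."""
    def __init__(self, hidden: int = 64, heads: int = 4):
        super().__init__()
        self.gru = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.lstm = nn.LSTM(input_size=hidden, hidden_size=hidden, batch_first=True)
        self.attn = nn.MultiheadAttention(embed_dim=hidden, num_heads=heads, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, 1) -- one decomposed component of the load series
        g, _ = self.gru(x)
        l, _ = self.lstm(g)
        h = g + l                      # residual (ResNet-style) connection
        a, _ = self.attn(h, h, h)      # self-attention over the sequence
        return self.head(a[:, -1, :])  # predict the next value from the last step

class VMDGLRTLikeModel(nn.Module):
    """Sums per-component forecasts into a single load prediction (assumed combination)."""
    def __init__(self, n_components: int = 5):
        super().__init__()
        self.branches = nn.ModuleList(GRULSTMAttentionBranch() for _ in range(n_components))

    def forward(self, components: torch.Tensor) -> torch.Tensor:
        # components: (batch, n_components, seq_len, 1), e.g. precomputed offline
        # by a VMD implementation (the vmdpy package is one option).
        return torch.stack(
            [b(components[:, i]) for i, b in enumerate(self.branches)], dim=0
        ).sum(dim=0)

# Example usage with random data standing in for VMD components:
model = VMDGLRTLikeModel(n_components=5)
fake_components = torch.randn(8, 5, 96, 1)   # batch of 8, 5 components, 96 time steps
print(model(fake_components).shape)          # torch.Size([8, 1])
```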
