Applications of Deep Learning: Emerging Technologies and Challenges

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: closed (30 April 2023) | Viewed by 3321

Special Issue Editor

Guest Editor
School of Electrical Engineering, Vellore Institute of Technology, Vellore 632014, India
Interests: multi-level inverter; high gain converter; multi-input multi-output converter; bidirectional DC-DC converter for electric vehicle application; IoT-based electric drives

Special Issue Information

Dear Colleagues,

Deep learning is an emerging technology in all fields, from medical studies to all areas of engineering. Among the domains where deep learning is frequently applied are computer vision, natural language processing, self-driving cars, control systems, robotics, and medicine, as well as complex problems such as discovering new drugs and identifying gene sequences. The ability to extract underlying principles without human intervention makes it an attractive alternative to conventional methods. However, improving the performance of deep learning models requires more data and high-performance computational resources. Moreover, real-time deployment is challenging due to high computational overheads and power-intensive hardware resources. The design of more efficient and less power-hungry deep learning models with optimal hardware (FPGA, ASIC, and GPU) and software resources is an active area of research.

This Special Issue intends to showcase outstanding breakthrough works using deep learning in medical, remote sensing, robotics, control systems, modern 5G, IoT, natural language processing, and agriculture applications, as well as any other applications based on deep learning methodology. Hardware accelerators (GPU, FPGA, ASIC) and software optimizers that are useful in implementing deep learning techniques are also within scope. Further, review articles and survey papers on modern trends in the various fields applying deep learning methods are welcome.

Dr. Dhanamjayulu Chittathuru
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • deep learning
  • reinforcement learning
  • convolutional neural networks
  • artificial intelligence
  • medical imaging
  • IoT and 5G technologies
  • natural language processing

Published Papers (2 papers)


Research

16 pages, 2203 KiB  
Article
Linguistic Features and Bi-LSTM for Identification of Fake News
by Attar Ahmed Ali, Shahzad Latif, Sajjad A. Ghauri, Oh-Young Song, Aaqif Afzaal Abbasi and Arif Jamal Malik
Electronics 2023, 12(13), 2942; https://doi.org/10.3390/electronics12132942 - 04 Jul 2023
Cited by 1 | Viewed by 1558
Abstract
With the spread of Internet technologies, the use of social media has increased exponentially. Although social media has many benefits, it has become the primary source of disinformation, or fake news. The spread of fake news is creating many societal and economic issues. It has become critical to develop an effective method for detecting fake news so that it can be stopped, removed, or flagged before spreading. To address the challenge of accurately detecting fake news, this paper proposes a solution called Statistical Word Embedding over Linguistic Features via Deep Learning (SWELDL Fake), which utilizes deep learning techniques to improve accuracy. The proposed model applies a statistical method, principal component analysis (PCA), to textual representations of news to identify significant features that can help detect fake news. In addition, word embedding is employed to capture linguistic features, and a Bidirectional Long Short-Term Memory (Bi-LSTM) network is used to classify news as true or fake. We validated the proposed model on a benchmark dataset, called SWELDL Fake, containing about 72,000 news articles collected from different benchmark datasets. Our model achieved a classification accuracy of 98.52% on fake news, surpassing the performance of state-of-the-art deep learning and machine learning models. Full article
(This article belongs to the Special Issue Applications of Deep Learning: Emerging Technologies and Challenges)
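The core of the classification pipeline described in the abstract, an embedding layer feeding a bidirectional LSTM whose final states drive a true/fake classifier, can be sketched as follows. This is a minimal illustration in PyTorch; the vocabulary size, dimensions, and layer shapes are assumptions for demonstration, not the authors' actual SWELDL Fake configuration, and the PCA feature-selection step is omitted.

```python
import torch
import torch.nn as nn

class BiLSTMNewsClassifier(nn.Module):
    """Sketch of a Bi-LSTM news classifier; hyperparameters are illustrative."""

    def __init__(self, vocab_size=20000, embed_dim=128, hidden_dim=64):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.bilstm = nn.LSTM(embed_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_dim, 2)  # two classes: true, fake

    def forward(self, token_ids):
        embedded = self.embedding(token_ids)        # (batch, seq_len, embed_dim)
        _, (h_n, _) = self.bilstm(embedded)         # final hidden state per direction
        h = torch.cat([h_n[0], h_n[1]], dim=1)      # concatenate forward + backward
        return self.classifier(h)                   # logits over {true, fake}

model = BiLSTMNewsClassifier()
logits = model(torch.randint(0, 20000, (4, 50)))    # 4 token sequences of length 50
```

The bidirectional LSTM reads each article left-to-right and right-to-left, so the concatenated final states summarize context from both directions before classification.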

15 pages, 744 KiB  
Article
Improving Code Completion by Solving Data Inconsistencies in the Source Code with a Hierarchical Language Model
by Yixiao Yang
Electronics 2023, 12(7), 1576; https://doi.org/10.3390/electronics12071576 - 27 Mar 2023
Viewed by 902
Abstract
In the field of software engineering, applying language models to the token sequence of source code is the state-of-the-art approach to building a code recommendation system. When applied to source code, it is difficult for state-of-the-art language models to deal with the data inconsistency problem, which is caused by the free naming conventions of source code. It is common for user-defined variables or methods with similar semantics to have different names in different projects. This means that a model trained on one project may encounter many words it has never seen before in another project. Such freely named variables or functions make training and prediction difficult and cause a data inconsistency problem between projects. However, we observe that the syntax tree of source code has a hierarchical structure. This structure shows strong regularity across projects and can be used to combat data inconsistency. In this paper, we propose a novel Hierarchical Language Model (HLM) to improve the robustness of the state-of-the-art recurrent language model so that it can deal with data inconsistency between training and testing. The proposed HLM takes the hierarchical structure of the code tree into consideration to predict code: it generates an embedding for each sub-tree according to its hierarchy and collects the embeddings of the sub-trees in context to predict the next piece of code. Experiments on intra-project and cross-project datasets indicate that the proposed HLM method outperforms the state-of-the-art recurrent language model in dealing with the data inconsistency between training and testing, achieving an average improvement in prediction accuracy of 11.2%. Full article
(This article belongs to the Special Issue Applications of Deep Learning: Emerging Technologies and Challenges)
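The hierarchical idea in the abstract, embedding each sub-tree of the syntax tree bottom-up so that prediction can condition on code structure rather than on freely chosen identifier names, can be illustrated with a small sketch. The tree representation, the per-node-type vectors, and the averaging combination rule here are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 16
node_type_vec = {}  # one vector per syntactic node type, e.g. "if", "call"

def type_embedding(node_type):
    """Look up (or lazily create) the embedding for a syntactic node type."""
    if node_type not in node_type_vec:
        node_type_vec[node_type] = rng.standard_normal(DIM)
    return node_type_vec[node_type]

def subtree_embedding(tree):
    """tree = (node_type, [children]); embed a sub-tree bottom-up by
    combining the node-type vector with the mean of child embeddings."""
    node_type, children = tree
    vec = type_embedding(node_type)
    if children:
        vec = vec + np.mean([subtree_embedding(c) for c in children], axis=0)
    return vec

# A tiny syntax tree: an `if` statement containing a comparison and a call.
tree = ("if", [("compare", [("name", []), ("constant", [])]),
               ("call", [("name", [])])])
context_vec = subtree_embedding(tree)  # context vector for predicting the next code
```

Because the embedding is built from syntactic node types rather than user-chosen identifiers, two projects that name the same concept differently still map to similar context vectors, which is the intuition behind using hierarchy to combat data inconsistency.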
