In-Memory Computing and Its Applications

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (30 November 2023) | Viewed by 3018

Special Issue Editors


Dr. I-Pin Chang
Guest Editor
Department of Industrial Management, National Taiwan University of Science and Technology, Taipei 106335, Taiwan
Interests: database management system; system development; information security; statistical analysis

Prof. Dr. Tung-Kuan Liu
Guest Editor
Department of Mechanical and Automation Engineering & Graduate Program of Industrial Design, National Kaohsiung University of Science and Technology, Kaohsiung, Taiwan
Interests: artificial intelligence; multi-object optimization technology; airline crew rostering and production scheduling technology; supply chain management technology

Dr. Tian-Fu Lee
Guest Editor
Department of Medical Informatics, Tzu Chi University, Hualien 97004, Taiwan
Interests: cryptography; medical information security; wireless network; network security; sensor networks and HIPAA privacy/security regulations

Special Issue Information

Dear Colleagues,

In-memory computing refers to computing systems in which memory is placed as close as possible to the processing units and, in its many forms, to the use of main memory rather than disks for both storage and computation. Architectures based on in-memory computing offer numerous benefits, such as lower access latency and higher throughput. This Special Issue seeks novel contributions in the form of high-quality, unpublished, in-depth fundamental research addressing open technical problems and challenges in the domain of in-memory computing and its applications.
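As a minimal illustration of the storage side of this idea, SQLite can hold an entire database in RAM via its special ":memory:" path, so both storage and query processing happen in main memory rather than on disk (the table and values below are purely illustrative):

```python
import sqlite3

# Connecting to ":memory:" creates a database that lives entirely in RAM:
# no disk I/O is involved in storing rows or answering queries.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE readings (sensor TEXT, value REAL)")
cur.executemany("INSERT INTO readings VALUES (?, ?)",
                [("a", 1.5), ("a", 2.5), ("b", 4.0)])
# Aggregation runs directly against the in-memory table.
(avg_a,) = cur.execute(
    "SELECT AVG(value) FROM readings WHERE sensor = 'a'").fetchone()
print(avg_a)  # 2.0
conn.close()
```

The trade-off, of course, is durability: an in-memory database of this kind disappears when the process exits, which is why production in-memory systems pair RAM-resident data with logging or replication.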

The goal of this Special Issue is to explore existing architectures, models, and techniques and to integrate new technologies, with a focus on performance evaluation and comparison with existing solutions in the field of in-memory computing and its applications. Both theoretical and experimental studies are encouraged, and high-quality review and survey papers are also welcome. Academic researchers, developers, and industry practitioners are invited to contribute papers to this Special Issue. Topics of interest include (but are not limited to) the following:

  • In-memory data management
  • In-memory processing with multi-cores/many-cores
  • Programming frameworks and compilation techniques for in-memory computing
  • In-memory computing applications and architectures
  • System software support for in-memory computing
  • Big data applications with in-memory computing
  • In-memory computing for deep learning and neural networks
  • Security, privacy, and trust issues for in-memory computing

Dr. I-Pin Chang
Prof. Dr. Tung-Kuan Liu
Dr. Tian-Fu Lee
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, you can access the submission form on the journal website. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • big data
  • security
  • in-memory computing
  • in-memory computing architectures
  • in-memory computing applications
  • in-memory data management
  • deep learning
  • neural networks

Published Papers (2 papers)


Research

17 pages, 432 KiB  
Article
ESL: A High-Performance Skiplist with Express Lane
by Yedam Na, Bonmoo Koo, Taeyoon Park, Jonghyeok Park and Wook-Hee Kim
Appl. Sci. 2023, 13(17), 9925; https://doi.org/10.3390/app13179925 - 01 Sep 2023
Viewed by 906
Abstract
With the increasing capacity and cost-efficiency of DRAM in multi-core environments, in-memory databases have emerged as fundamental solutions for delivering high performance. The index structure is a crucial component of the in-memory database, which, leveraging fast access to DRAM, plays an important role in the performance improvement and scalability of in-memory databases. A skiplist is one of the most widely used in-memory index structures and it has been adopted by popular databases. However, skiplists suffer from poor performance due to their structural limitations. In this work, we propose ESL, a high-performance and scalable skiplist. ESL efficiently enhances the performance of traverse operations by optimizing index levels for the CPU cache. With CPU cache-optimized index levels, we synergistically leverage a combination of exponential and linear searches. In addition, ESL reduces synchronization overhead by updating the index levels asynchronously, while tolerating inconsistencies. In our YCSB evaluation, ESL improves throughput by up to 2.8× over other skiplists. ESL also shows lower tail latency than other skiplists by up to 35×. Also, ESL consistently shows higher throughput in our real-world workload evaluation.
(This article belongs to the Special Issue In-Memory Computing and Its Applications)
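The search strategy the abstract describes, an exponential probe to narrow the range followed by a linear scan within it, can be sketched on a plain sorted array (this is an illustrative toy, not the authors' ESL implementation; the function name and structure are hypothetical):

```python
def exp_then_linear_search(arr, target):
    """Return the leftmost index i in sorted arr with arr[i] >= target,
    or len(arr) if no such element exists."""
    n = len(arr)
    # Exponential phase: double the probe position until it overshoots
    # the target, narrowing the candidate range to [bound // 2, bound).
    bound = 1
    while bound < n and arr[bound] < target:
        bound *= 2
    # Linear phase: scan the (cache-friendly, short) narrowed range.
    for i in range(bound // 2, min(bound, n)):
        if arr[i] >= target:
            return i
    return min(bound, n)

keys = [1, 3, 5, 7, 9, 11, 13, 15]
print(exp_then_linear_search(keys, 8))   # 4  (arr[4] == 9)
print(exp_then_linear_search(keys, 20))  # 8  (past the end)
```

The appeal of this combination is that the exponential phase touches only O(log n) elements, while the final linear scan stays within a small contiguous region, a pattern that favors CPU cache lines over the pointer-chasing of a classic skiplist traversal.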

15 pages, 4674 KiB  
Article
Multi-Scale Aggregation Residual Channel Attention Fusion Network for Single Image Deraining
by Jyun-Guo Wang and Cheng-Shiuan Wu
Appl. Sci. 2023, 13(4), 2709; https://doi.org/10.3390/app13042709 - 20 Feb 2023
Cited by 2 | Viewed by 1186
Abstract
Images captured on rainy days are disturbed by rain streaks of varying scales and densities, resulting in degraded image quality. This study sought to eliminate rain streaks from images using a two-stage network architecture involving progressive multi-scale recovery and aggregation. The proposed multi-scale aggregation residual channel attention fusion network (MARCAFNet) uses kernels of various scales to recover details at various levels of granularity, enhancing the robustness of the model to streaks of various sizes, densities, and shapes. When applied to benchmark datasets, the proposed method outperformed other state-of-the-art schemes in restoring image details without distorting the image structure.
(This article belongs to the Special Issue In-Memory Computing and Its Applications)
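The core idea of multi-scale aggregation, processing the same input with kernels of several sizes and fusing the results, can be shown in a stripped-down 1-D analogy (this toy smoother is not MARCAFNet; the function and kernel choices are illustrative assumptions):

```python
import numpy as np

def multi_scale_smooth(signal, kernel_sizes=(3, 7)):
    """Filter one signal with averaging kernels of several widths,
    then fuse the per-scale outputs by averaging them."""
    scales = []
    for k in kernel_sizes:
        kernel = np.ones(k) / k          # box filter at this scale
        scales.append(np.convolve(signal, kernel, mode="same"))
    # Fusion step: here a simple mean; MARCAFNet instead learns how
    # to weight and combine scales via residual channel attention.
    return np.mean(scales, axis=0)

out = multi_scale_smooth(np.ones(20))
print(out.shape)  # (20,)
```

A small kernel preserves fine detail (thin rain streaks), a large one captures broad structure (dense streak regions); aggregating both is what makes the approach robust to streaks of varying size and density.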
