Emerging Memory Technologies for Next-Generation Applications

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: closed (25 December 2021) | Viewed by 7483

Special Issue Editors

Department of Smart Systems Software, Soongsil University, 369 Sangdo-ro, Dongjak-gu, Seoul 06978, Republic of Korea
Interests: data management systems; big data analysis and mining; file and storage systems; emerging memory technologies; cloud computing
Department of Software Engineering, Gyeongsang National University, Jinju-si 52828, Republic of Korea
Interests: storage system; concurrency; operating system; computer architecture

Department of Computer Engineering, Kwangwoon University, Nowon-gu, Seoul 01897, Republic of Korea
Interests: operating systems; file systems; embedded systems; storage systems; flash memory; Linux; Android; RTOS; mobile computing; cloud computing; edge computing; multimedia systems; browser; data science; machine learning

Special Issue Information

Dear Colleagues,

With the explosive growth of data-centric applications, the demand for memory and storage capacity continues to rise. In response, memory technologies are evolving rapidly, from semiconductor devices up through the software layers. High-density memories such as 3D XPoint have been released to accommodate the large memory footprints of modern applications, and scalable distributed memory systems are growing in popularity as data volumes increase exponentially. In addition, in-memory/in-storage processing techniques are being actively explored to accelerate data-oriented applications by reducing the data movement required for computation. Moreover, to cope flexibly with the varied needs of applications, heterogeneous memory systems, in which memory devices with different performance and cost characteristics work in cooperation, are also gaining attention.

In this context, this Special Issue aims to highlight emerging memory technologies suited to the demands of next-generation applications. Potential topics include, but are not limited to, the following:

  • Memory and storage optimization for AI/ML applications
  • Processing in memory (PIM)/In-storage processing technologies
  • Energy-efficient memory/storage management
  • System software for emerging memory technologies
  • Large-scale/heterogeneous/disaggregated memory systems
  • Persistent memory and storage
  • Memory interfaces for emerging devices
  • Workload analysis and benchmarking for emerging memories

Prof. Eunji Lee
Prof. Jaeho Kim
Prof. Taeseok Kim
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Memory and storage optimization for AI/ML applications
  • Processing in memory (PIM)/In-storage processing technologies
  • Energy-efficient memory/storage management
  • System software for emerging memory technologies
  • Large-scale/heterogeneous/disaggregated memory systems
  • Persistent memory and storage
  • Memory interfaces for emerging devices
  • Workload analysis and benchmarking for emerging memories

Published Papers (3 papers)


Research

16 pages, 1983 KiB  
Article
Selective Power-Loss-Protection Method for Write Buffer in ZNS SSDs
by Junseok Yang, Seokjun Lee and Sungyong Ahn
Electronics 2022, 11(7), 1086; https://doi.org/10.3390/electronics11071086 - 30 Mar 2022
Cited by 1 | Viewed by 2519
Abstract
Most SSDs (solid-state drives) use internal DRAM (dynamic random-access memory) to improve I/O performance and extend SSD lifespan by absorbing write requests. However, this volatile memory does not guarantee the persistence of buffered data in the event of a sudden power-off. Therefore, highly reliable enterprise SSDs employ power-loss-protection (PLP) logic that uses the back-up power of capacitors to ensure the durability of buffered data. The SSD must provide capacitance for PLP in proportion to the size of the volatile buffer. Meanwhile, emerging ZNS (Zoned Namespace) SSDs are attracting attention because they can support the many I/O streams that are useful in multi-tenant systems. Although ZNS SSDs, unlike conventional block-interface SSDs, do not use an internal mapping table, a large write buffer is still required to support many I/O streams: because the host can allocate separate zones to different I/O streams, each stream needs its own write buffer. Moreover, the larger the capacity and the more I/O streams a ZNS SSD supports, the larger the write buffer required. However, the size of the write buffer is bounded by the available capacitance, which is limited not only by the SSD's internal space but also by cost. Therefore, in this paper, we present a set of techniques that significantly reduce the amount of capacitance required in ZNS SSDs while ensuring the durability of buffered data during a sudden power-off. First, we note that modern file systems and databases have their own data-recovery mechanisms, such as WAL (write-ahead log) and journaling. We therefore propose a selective power-loss-protection method that ensures durability only for the WAL or journal required for data recovery, not for the entire buffered data. Second, to minimize the time taken by PLP, we propose a balanced flush method that temporarily writes buffered data to multiple zones to maximize parallelism and restores the data to its original location when power returns. The proposed methods are implemented and evaluated by modifying FEMU (a QEMU-based flash emulator) and RocksDB. According to the experimental results, the proposed selective PLP reduces the required capacitance by 50-90% while retaining the reliability of ZNS SSDs. In addition, the balanced flush method reduces PLP latency by up to 96%.
(This article belongs to the Special Issue Emerging Memory Technologies for Next-Generation Applications)
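The selective-PLP idea in the abstract above lends itself to a short sketch: buffered writes are tagged by kind, and on sudden power-off only WAL/journal entries are preserved, since ordinary data pages can be regenerated from the log during recovery. The names (`WriteBuffer`, `flush_on_power_loss`) and the tagging scheme are illustrative assumptions, not the paper's firmware interface.

```python
# Illustrative sketch of selective power-loss protection: only entries
# tagged as recovery metadata (WAL/journal) must survive a power loss.
WAL, DATA = "wal", "data"

class WriteBuffer:
    def __init__(self):
        self.entries = []  # list of (zone_id, payload, kind)

    def append(self, zone_id, payload, kind=DATA):
        self.entries.append((zone_id, payload, kind))

    def flush_on_power_loss(self):
        """Return only the entries that must be made durable.

        Ordinary data entries are dropped because the host can rebuild
        them from the WAL/journal after recovery.
        """
        protected = [e for e in self.entries if e[2] == WAL]
        dropped = len(self.entries) - len(protected)
        return protected, dropped

buf = WriteBuffer()
buf.append(0, b"row-update")             # regular data, recoverable via WAL
buf.append(1, b"commit-record", WAL)     # must be durable
buf.append(0, b"row-update-2")
protected, dropped = buf.flush_on_power_loss()
```

Because the capacitor budget only has to cover the protected subset rather than the whole buffer, shrinking that subset is the source of the capacitance saving reported above.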

15 pages, 1736 KiB  
Article
Differentiated Protection and Hot/Cold-Aware Data Placement Policies through k-Means Clustering Analysis for 3D-NAND SSDs
by Seungwoo Son and Jaeho Kim
Electronics 2022, 11(3), 398; https://doi.org/10.3390/electronics11030398 - 28 Jan 2022
Cited by 1 | Viewed by 2463
Abstract
3D-NAND flash memory provides high capacity per unit area by stacking planar 2D-NAND cells. However, because of the nature of the stacking process, the frequency of errors varies with the layer and the physical cell location, and this phenomenon becomes more pronounced as the number of write/erase (program/erase) cycles increases. Error-correction codes (ECC) are used for error correction in the majority of flash-based storage devices, such as SSDs (solid-state drives). Because this approach provides a constant level of data protection for all flash-memory pages, it is limited in 3D-NAND flash memory, where the error rate varies with physical location. Consequently, in this paper, pages and layers with varying error rates are classified into clusters using the k-means machine-learning algorithm, and each cluster is assigned a different level of data-protection strength. We classify pages and layers based on the number of errors measured at the end of an endurance test, and, for areas vulnerable to errors, we show as an example how differentiated protection strength can be provided by adding parity data to the stripe. Furthermore, areas vulnerable to retention errors are identified from retention error rates, and bit error rates are significantly reduced through our hot/cold-aware data placement policy. We show that the proposed differentiated data protection and hot/cold-aware data placement policies improve the reliability and lifespan of 3D-NAND flash memory compared with existing ECC- or RAID-style data protection schemes.
(This article belongs to the Special Issue Emerging Memory Technologies for Next-Generation Applications)
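As a rough illustration of the clustering step described above, the toy one-dimensional k-means below groups per-page error counts into protection classes. The input counts and k = 2 are made-up example values; the paper clusters real 3D-NAND error measurements and assigns each resulting cluster its own protection strength.

```python
# Toy 1-D k-means over hypothetical per-page error counts: pages in the
# high-error cluster would receive stronger protection (e.g., extra parity).
import random

def kmeans_1d(values, k, iters=20, seed=0):
    rng = random.Random(seed)
    centroids = list(rng.sample(values, k))     # random initial centroids
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            # assign each value to its nearest centroid
            nearest = min(range(k), key=lambda c: abs(v - centroids[c]))
            clusters[nearest].append(v)
        # recompute centroids; keep the old one if a cluster is empty
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

error_counts = [3, 4, 2, 95, 110, 5, 102, 3]    # hypothetical measurements
centroids, clusters = kmeans_1d(error_counts, k=2)
# The cluster around the high centroid marks the error-prone pages.
```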

15 pages, 3161 KiB  
Article
UHNVM: A Universal Heterogeneous Cache Design with Non-Volatile Memory
by Xiaochang Li and Zhengjun Zhai
Electronics 2021, 10(15), 1760; https://doi.org/10.3390/electronics10151760 - 22 Jul 2021
Viewed by 1651
Abstract
Over recent decades, non-volatile memory (NVM) has been anticipated to scale up main memory capacity, improve application performance, and narrow the speed gap between main memory and storage devices, while supporting persistent storage that survives power outages. However, to exploit NVM, existing DRAM-based applications must be rewritten: the developer needs a good understanding of the application code in order to manually identify the data suited to NVM. To facilitate NVM deployment for existing legacy applications without requiring such code understanding, we propose UHNVM, a universal heterogeneous cache hierarchy that automatically selects and stores the appropriate application data in non-volatile memory. In this article, a program-context (PC) technique is proposed in user space to help UHNVM classify data. Compared with conventional hot/cold file categories, the PC technique categorizes application data at a finer granularity, enabling data to be stored in either NVM or SSDs efficiently for better performance. Our experimental results on a real Optane dual in-line memory module (DIMM) show that the new heterogeneous architecture reduces elapsed time by about 11% compared with a conventional kernel memory configuration without NVM.
(This article belongs to the Special Issue Emerging Memory Technologies for Next-Generation Applications)
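The program-context idea described above can be sketched as follows: the call path that issues an access identifies a context, and per-context statistics drive tier placement. `program_context`, `TieredCache`, and the promotion threshold below are hypothetical names and policy for illustration, not UHNVM's actual implementation.

```python
# Sketch of program-context (PC) classification: data accessed from a
# frequently recurring call path is placed in the fast tier (NVM),
# everything else stays in the slower tier (SSD).
import traceback
from collections import Counter

def program_context():
    """Hash the current call path (excluding this helper) into a context id."""
    frames = traceback.extract_stack()[:-1]
    return hash(tuple((f.name, f.lineno) for f in frames))

class TieredCache:
    NVM_THRESHOLD = 3  # accesses before a context's data is promoted (made-up policy)

    def __init__(self):
        self.hits = Counter()

    def place(self, ctx):
        self.hits[ctx] += 1
        return "NVM" if self.hits[ctx] >= self.NVM_THRESHOLD else "SSD"

cache = TieredCache()

def hot_reader():
    # Every call originates from the same call site, so the context repeats
    # and the data is eventually promoted to NVM.
    return cache.place(program_context())

tiers = [hot_reader() for _ in range(4)]  # ["SSD", "SSD", "NVM", "NVM"]
```

The appeal of keying on the call path rather than on file names is granularity: two accesses to the same file from different code paths can be placed in different tiers.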
