Resource Management for Emerging Computing Systems

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: 31 August 2024 | Viewed by 3410

Special Issue Editors


Prof. Dr. Hyokyung Bahn
Guest Editor
Department of Computer Engineering, Ewha Womans University, Seoul 03760, Republic of Korea
Interests: operating systems; real-time systems; memory and storage management; embedded systems; system optimizations

Dr. Kyungwoon Cho
Guest Editor
Department of Computer Engineering, Ewha Womans University, Seoul, Republic of Korea
Interests: multimedia systems; cloud computing; real-time systems; embedded systems; operating systems

Dr. Sungyong Ahn
Guest Editor
School of Computer Science and Engineering, Pusan National University, Busan 46241, Republic of Korea
Interests: operating systems; cloud platforms; non-volatile memory storage; non-block-based storage

Special Issue Information

Dear Colleagues,

Emerging computing systems differ from traditional systems in both hardware and software. From the hardware point of view, emerging computing systems are equipped with high-performance SSDs (solid-state drives), PM (persistent memory), and various types of computing resources (GPGPUs, NPUs, etc.) for the efficient processing of memory- and computation-intensive workloads. Moreover, rather than operating stand-alone, emerging computing systems can utilize various remote resources, such as cloud and edge servers, by means of offloading and migration techniques.

On the software side of emerging computing systems, as various AI (artificial intelligence) and ML (machine learning) techniques are incorporated into software design, the resource usage behavior of processors, memory, and storage differs from that of traditional software. In particular, workloads such as autonomous driving and smart factories have large memory footprints and long computation processes while also being subject to strict real-time constraints. At the same time, their data access locality is weak, which degrades the effectiveness of traditional resource management techniques such as caching.

The potential topics of this Special Issue include, but are not limited to, resource management that reflects the behavior of emerging workloads under the new hardware characteristics of emerging computing systems, including DVFS (dynamic voltage/frequency scaling), task offloading, storage management, memory management, cloud resource management, mobile resource management, caching, scheduling, and energy-saving techniques.

Prof. Dr. Hyokyung Bahn
Dr. Kyungwoon Cho
Dr. Sungyong Ahn
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • resource management
  • dynamic voltage/frequency scaling
  • task offloading
  • storage management
  • memory management
  • cloud resource management
  • real-time embedded systems
  • caching
  • scheduling
  • energy-saving technique

Published Papers (4 papers)


Research

11 pages, 443 KiB  
Article
Data Placement Using a Classifier for SLC/QLC Hybrid SSDs
by Heeseong Cho and Taeseok Kim
Appl. Sci. 2024, 14(4), 1648; https://doi.org/10.3390/app14041648 - 18 Feb 2024
Viewed by 530
Abstract
In hybrid SSDs (solid-state drives) consisting of SLC (single-level cell) and QLC (quad-level cell) regions, efficiently using the limited SLC cache space is crucial. In this paper, we present a practical data placement scheme that determines the placement location of incoming write requests using a lightweight machine-learning model. It leverages information about I/O workload characteristics and SSD status to identify, with high accuracy, cold data that does not need to be stored in the SLC cache. By strategically bypassing the SLC cache for cold data, our scheme significantly reduces unnecessary data movement between the SLC and QLC regions, improving the overall efficiency of the SSD. Through simulation-based studies using real-world workloads, we demonstrate that our scheme outperforms existing approaches by up to 44%.
(This article belongs to the Special Issue Resource Management for Emerging Computing Systems)
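The paper's actual model and features are not reproduced in the abstract; as a rough, hypothetical sketch of the core idea (a lightweight classifier deciding whether a write bypasses the SLC cache), the following uses made-up feature names, weights, and threshold purely for illustration:

```python
# Hypothetical sketch of classifier-driven data placement for an SLC/QLC
# hybrid SSD: writes classified as cold bypass the SLC cache and go
# straight to QLC, avoiding later SLC->QLC migrations. All feature names,
# weights, and the threshold below are illustrative assumptions, not the
# paper's actual model.
from dataclasses import dataclass

@dataclass
class WriteRequest:
    lba: int              # logical block address
    recency: float        # 0..1, how recently this LBA was written (1 = just now)
    frequency: float      # 0..1, normalized write count for this LBA
    slc_occupancy: float  # 0..1, current fill level of the SLC cache

def is_cold(req: WriteRequest, threshold: float = 0.5) -> bool:
    """Linear scoring stand-in for a lightweight ML model: low recency and
    frequency, plus a full SLC cache, push a request toward QLC."""
    hotness = 0.45 * req.recency + 0.45 * req.frequency - 0.10 * req.slc_occupancy
    return hotness < threshold

def place(req: WriteRequest) -> str:
    # Cold data bypasses the SLC cache; hot data is absorbed by it.
    return "QLC" if is_cold(req) else "SLC"

hot = WriteRequest(lba=100, recency=0.9, frequency=0.8, slc_occupancy=0.5)
cold = WriteRequest(lba=200, recency=0.1, frequency=0.05, slc_occupancy=0.9)
print(place(hot))   # SLC
print(place(cold))  # QLC
```

In a real scheme the scoring function would be a trained model and the features would come from runtime I/O statistics; the point here is only the decision structure of the bypass.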

26 pages, 4196 KiB  
Article
Amphisbaena: A Novel Persistent Buffer Management Strategy to Improve SMR Disk Performance
by Chi Zhang, Fangxing Yu, Shiqiang Nie, Wei Tang, Fei Liu, Song Liu and Weiguo Wu
Appl. Sci. 2024, 14(2), 630; https://doi.org/10.3390/app14020630 - 11 Jan 2024
Viewed by 487
Abstract
The explosive growth of massive data makes shingled magnetic recording (SMR) disks a promising candidate for balancing capacity and cost. SMR disks are typically configured with a persistent buffer to reduce the read–modify–write (RMW) overhead introduced by non-sequential writes. Traditional SMR zone-based persistent buffers are subject to sequential-write constraints, and frequent cleanups cause disk performance degradation. Conventional magnetic recording (CMR) zones with in-place update capabilities enable less frequent cleanups and are gradually being used to construct persistent buffers in certain SMR disks. However, existing CMR zone-based persistent buffer designs fail to accurately capture hot blocks with long update periods and are limited by an inflexible data layout, resulting in inefficient cleanups. To address the above issues, we propose a strategy called Amphisbaena. First, a two-phase data block classification method is proposed to capture frequently updated blocks. Then, a locality-aware buffer space management scheme is developed to dynamically manage blocks with different update frequencies. Finally, a latency-sensitive garbage collection policy based on the above is designed to mitigate the impact of cleanup on user requests. Experimental results show that Amphisbaena reduces latency by an average of 29.9% and the number of RMWs by an average of 37% compared to current state-of-the-art strategies.
(This article belongs to the Special Issue Resource Management for Emerging Computing Systems)

15 pages, 2385 KiB  
Article
Performance Analysis of Container Effect in Deep Learning Workloads and Implications
by Soyeon Park and Hyokyung Bahn
Appl. Sci. 2023, 13(21), 11654; https://doi.org/10.3390/app132111654 - 25 Oct 2023
Viewed by 1029
Abstract
Container-based deep learning has emerged as a cutting-edge trend in modern AI applications. Containers have several merits compared to traditional virtual machine platforms in terms of resource utilization and mobility. Nevertheless, containers still pose challenges in executing deep learning workloads efficiently with respect to resource usage and performance. In particular, the performance of container-based deep learning is vulnerable in multi-tenant environments due to resource-usage conflicts. To quantify the container effect in deep learning, this article captures various event traces related to deep learning performance using containers and compares them with those captured on a host machine without containers. By analyzing the system calls invoked and various performance metrics, we quantify the effect of containers in terms of resource consumption and interference. We also explore the effects of executing multiple containers to highlight the issues that arise in multi-tenant environments. Our observations show that containerization can be a viable solution for deep learning workloads, but it is important to manage resources carefully to avoid excessive contention and interference, especially for storage write-back operations. We also suggest a preliminary solution to avoid the performance bottlenecks of page faults and storage write-backs by introducing an intermediate non-volatile flushing layer, which improves I/O latency by 82% on average.
(This article belongs to the Special Issue Resource Management for Emerging Computing Systems)

19 pages, 2067 KiB  
Article
Balloon: An Elastic Data Management Strategy for Interlaced Magnetic Recording
by Chi Zhang, Song Liu, Fangxing Yu, Menghan Li, Wei Tang, Fei Liu and Weiguo Wu
Appl. Sci. 2023, 13(17), 9767; https://doi.org/10.3390/app13179767 - 29 Aug 2023
Viewed by 765
Abstract
Recently, the emerging technology known as Interlaced Magnetic Recording (IMR) has been receiving widespread attention from both industry and academia. IMR-based disks incorporate interlaced track layouts and energy-assisted techniques to dramatically increase areal densities. The interlaced track layout means that in-place updates to the bottom track require rewriting the adjacent top track to ensure data consistency. However, at high disk utilization, frequent track rewrites degrade disk performance. To address this problem, we propose a solution called Balloon to reduce the frequency of track rewrites. First, an adaptive write-interference data placement policy is introduced, which judiciously places data on tracks with low rewrite probability to avoid unnecessary rewrites. Next, an on-demand data shuffling mechanism is designed to reduce user-request write latency by implicitly migrating data and promptly swapping tracks with high update-block coverage to the top track. Finally, a write-interference-free persistent buffer design is proposed. This design dynamically adjusts buffer admission constraints and selectively evicts data blocks to improve the cooperation between data placement and data shuffling. Evaluation results show that Balloon significantly improves the write performance of IMR-based disks at medium and high utilization compared with state-of-the-art studies.
(This article belongs to the Special Issue Resource Management for Emerging Computing Systems)
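To make the rewrite overhead described in the abstract concrete, the following toy model sketches why in-place updates to bottom tracks are expensive on IMR disks; the track numbering (even = bottom, odd = top) and the unit-cost model are illustrative assumptions, not the paper's design:

```python
# Hypothetical sketch of the IMR update rule: each bottom track is
# partially overlapped by its neighboring top tracks, so an in-place
# update to a bottom track must also rewrite any valid adjacent top
# tracks to preserve their data. Layout and costs are illustrative only.

def adjacent_top_tracks(bottom: int, num_tracks: int) -> list[int]:
    """In this toy layout, even-numbered tracks are bottom tracks and
    odd-numbered tracks are top tracks; bottom track t is overlapped by
    top tracks t-1 and t+1 (when they exist)."""
    return [t for t in (bottom - 1, bottom + 1) if 0 <= t < num_tracks]

def update_cost(bottom: int, valid_top: set[int], num_tracks: int) -> int:
    """Track writes needed for one bottom-track update: one write for the
    bottom track itself, plus one rewrite per valid adjacent top track."""
    rewrites = [t for t in adjacent_top_tracks(bottom, num_tracks) if t in valid_top]
    return 1 + len(rewrites)

# At high utilization most top tracks hold valid data, tripling the cost:
print(update_cost(2, valid_top={1, 3}, num_tracks=8))  # 3: bottom + two top rewrites
print(update_cost(2, valid_top=set(), num_tracks=8))   # 1: no rewrite needed
```

This is the amplification Balloon targets: placing update-prone data on tracks with few valid overlapping neighbors keeps the rewrite count close to one.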
