Cloud Computing and Big Data Systems: New Trends and Applications

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: 20 July 2024 | Viewed by 3605

Special Issue Editors

Department of Computer Science and Technology, Nanjing University, Nanjing 210023, China
Interests: big data systems; cloud computing platforms; data cache and index
School of Information & Electronic Engineering, Zhejiang Gongshang University, Hangzhou 310018, China
Interests: cloud computing; cloud security

Special Issue Information

Dear Colleagues,

The world has entered the data revolution era. Along with the development of information technologies such as the internet, mobile internet, the internet of things (IoT), artificial intelligence (AI), and the metaverse, the amount of data generated, stored, processed, and analyzed today is growing exponentially. Over the past decade, we have witnessed a great deal of innovation in the big data processing and cloud resource management stack, including well-known big data computation, storage, and resource management systems such as Apache Hadoop, Apache Spark, Alluxio, and Apache Mesos. Nowadays, the quantity and complexity of data continue to increase. In addition, novel big data applications call for reduced cloud resource consumption and guaranteed platform security during big data processing. This presents tremendous challenges and opportunities for the research and industrial communities.

To address these challenges, this Special Issue entitled “Cloud Computing and Big Data Systems: New Trends and Applications” welcomes scholars, experts, researchers, and engineers in related areas to submit their contributions, providing new solutions and ideas. The topics of interest include, but are not limited to:

  • Novel big data storage and management systems and technologies.
  • Novel big data computing frameworks and systems.
  • Novel cloud computing and resource scheduling technologies.
  • Novel data privacy or security technologies for big data and cloud systems.
  • Novel big data and cloud computing applications.

We look forward to receiving your contributions.

Dr. Rong Gu
Prof. Dr. Mande Xie
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, you can proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • cloud computing
  • big data processing systems
  • big data storage systems
  • big data applications
  • big data/cloud security

Published Papers (3 papers)


Research

23 pages, 1344 KiB  
Article
Optimizing Data Processing: A Comparative Study of Big Data Platforms in Edge, Fog, and Cloud Layers
by Thanda Shwe and Masayoshi Aritsugi
Appl. Sci. 2024, 14(1), 452; https://doi.org/10.3390/app14010452 - 04 Jan 2024
Viewed by 885
Abstract
Intelligent applications in several areas increasingly rely on big data solutions to improve their efficiency, but the processing and management of big data incur high costs. Although cloud-computing-based big data management and processing offer a promising solution to provide scalable and abundant resources, the current cloud-based big data management platforms do not properly address the high latency, privacy, and bandwidth consumption challenges that arise when sending large volumes of user data to the cloud. Computing in the edge and fog layers is quickly emerging as an extension of cloud computing used to reduce latency and bandwidth consumption, resulting in some of the processing tasks being performed in edge/fog-layer devices. Although these devices are resource-constrained, recent increases in resource capacity provide the potential for collaborative big data processing. We investigated the deployment of data processing platforms based on three different computing paradigms, namely batch processing, stream processing, and function processing, by aggregating the processing power from a diverse set of nodes in the local area. Herein, we demonstrate the efficacy and viability of edge-/fog-layer big data processing across a variety of real-world applications and in comparison to the cloud-native approach in terms of performance.
(This article belongs to the Special Issue Cloud Computing and Big Data Systems: New Trends and Applications)

14 pages, 1571 KiB  
Article
Benchmarking GPU Tensor Cores on General Matrix Multiplication Kernels through CUTLASS
by Xuanteng Huang, Xianwei Zhang, Panfei Yang and Nong Xiao
Appl. Sci. 2023, 13(24), 13022; https://doi.org/10.3390/app132413022 - 06 Dec 2023
Viewed by 1381
Abstract
GPUs have been broadly used to accelerate big data analytics, scientific computing, and machine intelligence. Particularly, matrix multiplication and convolution are two principal operations that use a large proportion of steps in modern data analysis and deep neural networks. These performance-critical operations are often offloaded to the GPU to obtain substantial improvements in end-to-end latency. In addition, multifarious workload characteristics and complicated processing phases in big data demand a customizable yet performant operator library. To this end, GPU vendors, including NVIDIA and AMD, have proposed template and composable GPU operator libraries to conduct specific computations on certain types of low-precision data elements. We formalize a set of benchmarks via CUTLASS, NVIDIA’s templated library that provides high-performance and hierarchically designed kernels. The benchmarking results show that, with the necessary fine tuning, hardware-level ASICs like tensor cores could dramatically boost performance in specific operations like GEMM offloading to modern GPUs.
(This article belongs to the Special Issue Cloud Computing and Big Data Systems: New Trends and Applications)

21 pages, 7772 KiB  
Article
Bespoke Virtual Machine Orchestrator: An Approach for Constructing and Reconfiguring Bespoke Virtual Machine in Private Cloud Environment
by Joonseok Park, Sumin Jeong and Keunhyuk Yeom
Appl. Sci. 2023, 13(16), 9161; https://doi.org/10.3390/app13169161 - 11 Aug 2023
Viewed by 664
Abstract
A cloud-computing company or user must create a virtual machine to build and operate a cloud environment. With the growth of cloud computing, it is necessary to build virtual machines that reflect the needs of both companies and users. In this study, we propose a bespoke virtual machine orchestrator (BVMO) as a method for constructing a virtual machine. The BVMO builds resource volumes as core assets to meet user requirements and builds virtual machines by reusing and combining these resource volumes. This can increase the reusability and flexibility of virtual-machine construction. A case study was conducted to build a virtual machine by applying the proposed BVMO to an actual OpenStack cloud platform, and it was confirmed that the construction time of the virtual machine was reduced compared with that of the existing method.
(This article belongs to the Special Issue Cloud Computing and Big Data Systems: New Trends and Applications)
