Cloud Computing and Networking 2019

A special issue of Future Internet (ISSN 1999-5903).

Deadline for manuscript submissions: closed (10 September 2019) | Viewed by 4123

Special Issue Editors


Dr. Panagiotis Kokkinos
Guest Editor
1. Electrical and Computer Engineering Department, National Technical University of Athens, Athens, Greece
2. Department of Computer Engineering, School of Applied Technology, Technological Educational Institute of Peloponnese, Greece
Interests: cloud computing and networking; optical networks; wireless and sensor networks

Prof. Dr. Dimosthenis Kyriazis
Guest Editor
Department of Digital Systems, University of Piraeus, Piraeus, Greece
Interests: data management; data analytics; cloud computing; edge computing; distributed systems; service-oriented architectures; 5G networks

Special Issue Information

Dear Colleagues,

Cloud computing has revolutionized the way applications and services are designed, developed, and run, providing affordable access to computing and storage resources for the masses. The technologies supporting cloud computing are progressing from virtual machines to containers and to serverless computing. Today, the migration of applications to cloud datacenters (DCs), combined with the hyperscale size of DCs and their worldwide placement, has led to the proliferation of DC-oriented traffic. This traffic is served by inter-datacenter and intra-datacenter networking infrastructures, and several networking technologies and advances are required to support the volume and dynamicity of cloud DC-related traffic.

At the same time, a number of emerging applications for the Internet of Things (IoT), Smart Cities, autonomous vehicles (drones and cars), VR/AR, CCTV cameras, and others, enabled by networking advances such as 5G, constantly create huge amounts of data, placing heavy, hard-to-meet demands on centralized cloud infrastructures and requiring low-latency, real-time processing. This pushes computing and storage resources out of their centralized locations and closer to end users and devices. Today this is materialized in the form of edge and fog computing, where processing takes place on or near the devices. The latter include mini-DCs deployed near highways, in buildings, or in the central offices of network operators, which also serve 5G processing requirements. These resources (e.g., fog nodes) can also aggregate or pre-process data, pushing to the centralized cloud only the information required for further, higher-level analysis, as illustrated in the sketch below.
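To make the data-reduction role of fog nodes concrete, here is a minimal sketch, assuming a hypothetical FogNode class and summary format (neither is defined in this call): raw sensor readings are buffered at the edge, and only a compact summary is pushed to the centralized cloud once a window of readings fills up.

```python
# A minimal, illustrative sketch of edge/fog pre-aggregation.
# FogNode, ingest() and the summary fields are assumptions for this example.

from statistics import mean
from typing import Dict, List, Optional


class FogNode:
    def __init__(self, window_size: int = 100):
        self.window_size = window_size      # readings collected before summarizing
        self.buffer: List[float] = []       # raw readings never leave the fog node

    def ingest(self, reading: float) -> Optional[Dict[str, float]]:
        """Buffer a raw reading; return a summary only when the window fills."""
        self.buffer.append(reading)
        if len(self.buffer) < self.window_size:
            return None                     # nothing is sent upstream yet
        summary = {
            "count": float(len(self.buffer)),
            "mean": mean(self.buffer),
            "min": min(self.buffer),
            "max": max(self.buffer),
        }
        self.buffer.clear()
        return summary                      # only this summary goes to the cloud


# Usage: 300 raw readings at the edge result in just 3 small cloud uploads.
node = FogNode(window_size=100)
for i in range(300):
    summary = node.ingest(float(i % 50))
    if summary is not None:
        print("push to cloud:", summary)
```

The design choice this illustrates is simply that bandwidth-heavy raw data stays at the edge, while the cloud receives only what is needed for higher-level analysis.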

It is expected that in the future the number of devices and data inputs will scale from thousands to millions to billions, reaching Exascale dimensions, while the sensors and cameras onboard the various devices will increase in number and capability (e.g., resolution), further skyrocketing the volume of data produced. The challenges ahead cannot be handled by a single technology or a single domain (intra-datacenter, inter-datacenter, centralized cloud, distributed cloud/fog). Instead, synergies are required among the technologies, tools, and techniques to be developed, considering end-to-end operation and tackling issues related to resource deployment and orchestration, networking, security, application management, and advanced analytics.

This Special Issue invites original research papers on new algorithms, protocols, architectures, technologies and solutions for future-generation cloud computing and cloud networking in the cloud and the edge/fog. Relevant topics include, but are not limited to:

- Optical Networks in Cloud Datacenter Networks

- Software Defined Networking in Cloud Datacenter Networks (including P4, protocols, etc.)

- Cloud and Edge/Fog Network Management

- Cloud, Edge, Fog Resource Orchestration

- Middleware and Middleboxes

- 5G and Edge/Fog Computing (Multi-Access Edge Computing)

- Big Data Management in the Edge/Fog and the Cloud

- Artificial Intelligence and Data Analytics in the Edge/Fog and the Cloud

- Serverless Computing in the Edge

- Basic Algorithmic Challenges (e.g., network slicing, resource allocation, prediction)

- Security, Privacy, and Confidentiality in the Cloud, Edge/Fog (e.g., blockchain)

- Peer-to-Peer Schemes in Edge/Fog

- End-To-End Architectures and Solutions

- Traffic Characterization and Engineering

- Disaster Recovery and Reliability

- Storage and Caching in the Edge/Fog and the Cloud

- Edge/Fog Computing and Storage Resources (generic resources, FPGA, GPUs)

- Standardization, Modelling and Interfaces

- Applications Utilizing Edge/Fog and Cloud Computing Technologies

- Business and Market Developments

Dr. Panagiotis Kokkinos
Prof. Dr. Dimosthenis Kyriazis
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Future Internet is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Cloud
  • Edge
  • Fog
  • Networking
  • Technologies
  • Protocols
  • Algorithms

Published Papers (1 paper)


Research

16 pages, 3081 KiB  
Article
A Dynamic Application-Partitioning Algorithm with Improved Offloading Mechanism for Fog Cloud Networks
by Adeel Abro, Zhongliang Deng, Kamran Ali Memon, Asif Ali Laghari, Khalid Hussain Mohammadani and Noor ul Ain
Future Internet 2019, 11(7), 141; https://doi.org/10.3390/fi11070141 - 28 Jun 2019
Cited by 9 | Viewed by 3670
Abstract
This paper proposes a new fog cloud architecture that performs joint energy-efficient task assignment (JEETA). The proposed JEETA architecture utilizes the dynamic application-partitioning algorithm (DAPTS), a novel algorithm that efficiently decides and switches whether a task should be offloaded or not in heterogeneous environments with minimal energy consumption. The proposed scheme outperforms baseline approaches such as MAUI, ThinkAir and CloneCloud in many performance aspects. Results show that for the execution of 1000 tasks on fog and mobile offloaded nodes, JEETA consumes the least energy, i.e., 23% of the total, whereas the other baseline approaches consume between 50–100% of the total energy. Results are validated via real test-bed experiments and trace-driven simulations.
(This article belongs to the Special Issue Cloud Computing and Networking 2019)
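For readers unfamiliar with offloading decisions of this kind, the following is a generic, hypothetical sketch of an energy- and latency-aware offload-versus-local decision. It is not the DAPTS/JEETA algorithm from the paper; the energy model, parameters, and thresholds are illustrative assumptions only.

```python
# Hypothetical sketch of a generic energy-aware offloading decision for a
# fog/cloud setting. All constants and the simple linear energy model are
# assumptions for illustration, not the paper's method.

from dataclasses import dataclass


@dataclass
class Task:
    cycles: float        # CPU cycles required by the task
    input_bits: float    # data that must be uploaded if the task is offloaded


def offload_decision(task: Task,
                     local_energy_per_cycle: float,
                     tx_energy_per_bit: float,
                     uplink_bps: float,
                     fog_cycles_per_sec: float,
                     local_cycles_per_sec: float) -> str:
    """Choose where to run a task by comparing rough energy/latency estimates."""
    # Energy the mobile device would spend computing locally.
    local_energy = task.cycles * local_energy_per_cycle
    # Energy the device spends just transmitting the input to the fog node.
    offload_energy = task.input_bits * tx_energy_per_bit

    local_latency = task.cycles / local_cycles_per_sec
    offload_latency = task.input_bits / uplink_bps + task.cycles / fog_cycles_per_sec

    # Offload only if it saves device energy without increasing latency.
    if offload_energy < local_energy and offload_latency <= local_latency:
        return "offload"
    return "local"


# Example: a compute-heavy task with a small input is typically offloaded.
task = Task(cycles=5e9, input_bits=2e6)
print(offload_decision(task,
                       local_energy_per_cycle=1e-9,
                       tx_energy_per_bit=5e-7,
                       uplink_bps=10e6,
                       fog_cycles_per_sec=10e9,
                       local_cycles_per_sec=1e9))   # -> "offload"
```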
