The Role of Fog and Edge Computing in Machine Learning-Based Applications

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Internet of Things".

Deadline for manuscript submissions: closed (31 October 2023) | Viewed by 10489

Special Issue Editor


Guest Editor
Department of Computer, Control and Management Engineering, Università degli Studi di Roma La Sapienza, Rome, Italy
Interests: dynamic distributed systems; mobile wireless networks

Special Issue Information

Dear Colleagues,

Recent years have witnessed a new spring of Artificial Intelligence (AI) and, in particular, of Machine Learning (ML), triggered by a favorable combination of technological improvements, the availability of enormous amounts of validated data, and the use of new types of neural networks.

We first saw ML-based algorithms surpass some human skills, for example in chess (Deep Blue), in video games (Deep Q-Learning), and in strategy games (AlphaGo); we now see them applied in various domains, for example in self-driving vehicles, personal assistance, image recognition for diagnosis, and, in general, data-driven optimization systems.

In line with this trend, ML is applied progressively and extensively to sensor-generated data.

A current trend in IT is to bring data-processing capacity closer to data sources, as exemplified by the so-called Fog and Edge Computing (FEC) architectural paradigm. Again, the enabling factors are technological: the increase in processing capacity of System-on-Chip (SoC) devices, the availability of low-cost hardware accelerators, and the adaptation of ML libraries to devices with limited processing resources.

Because of the broad usage of sensors in both civil and industrial domains, as exemplified by IoT applications in Industry 4.0 and smart city models, the volume of data to be processed by ML-based applications will certainly grow in the future.

ML at the edge makes it possible to perform data inference, at least in part, near the sensors that generate the data rather than exclusively on servers physically located in cloud data centers. The lower latency that results from not sending all data to the cloud reduces response time, a fundamental requirement for real-time applications and beyond.
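To make the latency argument concrete, here is a back-of-the-envelope sketch (not taken from the text; every number below is an assumption chosen purely for illustration) comparing the response time of cloud-only inference, where each sensor frame must cross a WAN, with inference on a nearby edge node:

```python
# Illustrative response-time model: transfer time + network round trip +
# inference time for one sensor frame. All parameter values are assumptions.

def response_time_ms(payload_kb, uplink_mbps, rtt_ms, compute_ms):
    """Time to move the payload over the link, plus round trip, plus inference."""
    transfer_ms = payload_kb * 8 / (uplink_mbps * 1000) * 1000
    return transfer_ms + rtt_ms + compute_ms

# Cloud: fast inference on a data-center server, but the frame crosses a WAN.
cloud = response_time_ms(payload_kb=500, uplink_mbps=10, rtt_ms=60, compute_ms=5)

# Edge: slower SoC inference, but the data stays on the local network.
edge = response_time_ms(payload_kb=500, uplink_mbps=100, rtt_ms=2, compute_ms=40)

print(f"cloud: {cloud:.0f} ms, edge: {edge:.0f} ms")
```

Under these invented parameters the edge path wins by several hundred milliseconds even though its per-frame compute is slower, which is the trade-off the paragraph above describes.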

In addition to inference, training operations (typically much more computationally demanding) could also gradually move to the edge, which greatly improves the privacy of the data used.

In this Special Issue, we invite experts from multiple areas to contribute and share their ideas and findings, illustrating the role of fog and edge computing in the trend described above and how to face the many challenges that must be solved to make the most of this paradigm. Topics welcome in this issue include:

  • Algorithms and solutions for exploiting the preprocessing of sensor-generated data at edge or fog nodes, including privacy improvements from local processing that avoids sending sensitive data to the cloud, such as distributed privacy-preserving algorithms.
  • Optimal algorithms for allocating, splitting, and offloading sensor-data processing between edge, fog, and cloud nodes; for example, when using computer vision algorithms to detect, recognize, and track individual moving objects, or to detect activities from real-time sensor streams.
  • Models, testbeds, and experimental reports on case studies that measure or predict the performance improvements of low-cost edge devices, i.e., System-on-Chip (SoC) devices equipped with hardware accelerators such as GPUs and/or TPUs, including trade-offs between power consumption, detection accuracy, and processing delay.
  • Collaborative and distributed inference and training algorithms running at the edge, including Federated Learning algorithms and related privacy issues.
  • Performance models for fog/edge computing infrastructures supporting sensor-generated data, ranging from detailed models of a single sensor or node to large-scale approximate models.
  • Security challenges arising from wireless communication between data sources and collecting nodes, particularly in scenarios where it is hard to prevent physical access to the infrastructure.
  • Application-aware management and orchestration frameworks for edge and fog resources, including workload-balancing algorithms for traffic generated by continuous-monitoring sensors, which must be distributed across heterogeneous processing nodes.
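As a purely illustrative sketch related to the allocation and offloading topic above, the greedy policy below assigns each sensor-processing task to the tier (edge, fog, or cloud) with the lowest estimated completion time that still has spare capacity. All tier parameters and task sizes are invented for illustration; real offloading policies would also model energy, bandwidth contention, and deadlines:

```python
# Hedged sketch of a greedy edge/fog/cloud offloading policy.
# Each tier has an (invented) per-unit compute cost, a fixed network cost,
# and a task capacity; tasks are placed one at a time on the cheapest tier.

TIERS = {
    # name:  ms of compute per unit of work, network ms, capacity in tasks
    "edge":  {"ms_per_unit": 8.0, "net_ms": 1,  "capacity": 2},
    "fog":   {"ms_per_unit": 3.0, "net_ms": 10, "capacity": 4},
    "cloud": {"ms_per_unit": 1.0, "net_ms": 60, "capacity": 100},
}

def place(tasks):
    """Greedily map each task (in units of work) to the cheapest tier with room."""
    load = {name: 0 for name in TIERS}
    plan = []
    for work in tasks:
        best = min(
            (name for name, t in TIERS.items() if load[name] < t["capacity"]),
            key=lambda name: TIERS[name]["net_ms"] + TIERS[name]["ms_per_unit"] * work,
        )
        load[best] += 1
        plan.append(best)
    return plan

print(place([1, 2, 5, 20, 40]))
```

With these numbers, small tasks land on the edge, mid-sized tasks on fog nodes, and only the heaviest work pays the network cost of the cloud, mirroring the split the topic list envisions.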

Dr. Roberto Beraldi
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (4 papers)


Research

16 pages, 3939 KiB  
Article
Research on Convolutional Neural Network Inference Acceleration and Performance Optimization for Edge Intelligence
by Yong Liang, Junwen Tan, Zhisong Xie, Zetao Chen, Daoqian Lin and Zhenhao Yang
Sensors 2024, 24(1), 240; https://doi.org/10.3390/s24010240 - 31 Dec 2023
Viewed by 762
Abstract
In recent years, edge intelligence (EI) has emerged, combining edge computing with AI, and specifically deep learning, to run AI algorithms directly on edge devices. In practical applications, EI faces challenges related to computational power, power consumption, size, and cost, with the primary challenge being the trade-off between computational power and power consumption. This has rendered traditional computing platforms unsustainable, making heterogeneous parallel computing platforms a crucial pathway for implementing EI. In our research, we leveraged the Xilinx Zynq 7000 heterogeneous computing platform, employed high-level synthesis (HLS) for design, and implemented two different accelerators for LeNet-5 using loop unrolling and pipelining optimization techniques. The experimental results show that when running at a clock speed of 100 MHz, the PIPELINE accelerator, compared to the UNROLL accelerator, experiences an 8.09% increase in power consumption but speeds up by 14.972 times, making the PIPELINE accelerator superior in performance. Compared to the CPU, the PIPELINE accelerator reduces power consumption by 91.37% and speeds up by 70.387 times, while compared to the GPU, it reduces power consumption by 93.35%. This study provides two different optimization schemes for edge intelligence applications through design and experimentation and demonstrates the impact of different quantization methods on FPGA resource consumption. These experimental results can provide a reference for practical applications, thereby providing a reference hardware acceleration scheme for edge intelligence applications.

17 pages, 7452 KiB  
Article
Lightweight and Energy-Aware Monocular Depth Estimation Models for IoT Embedded Devices: Challenges and Performances in Terrestrial and Underwater Scenarios
by Lorenzo Papa, Gabriele Proietti Mattia, Paolo Russo, Irene Amerini and Roberto Beraldi
Sensors 2023, 23(4), 2223; https://doi.org/10.3390/s23042223 - 16 Feb 2023
Cited by 3 | Viewed by 2126
Abstract
The knowledge of environmental depth is essential in multiple robotics and computer vision tasks for both terrestrial and underwater scenarios. Moreover, the hardware on which this technology runs, generally IoT and embedded devices, is limited in terms of power consumption; therefore, models with a low energy footprint need to be designed. Recent works aim at enabling depth perception using single RGB images on deep architectures, such as convolutional neural networks and vision transformers, which are generally unsuitable for real-time inference on low-power embedded hardware. Moreover, such architectures are trained to estimate depth maps mainly on terrestrial scenarios due to the scarcity of underwater depth data. For this purpose, we present two lightweight architectures based on optimized MobileNetV3 encoders and a specifically designed decoder to achieve fast inference and accurate estimation on embedded devices, a feasibility study to predict depth maps over underwater scenarios, and an energy assessment to understand the effective energy consumption during inference. Precisely, we propose the MobileNetV3S75 configuration to infer on the 32-bit ARM CPU and the MobileNetV3LMin for the 8-bit Edge TPU hardware. In underwater settings, the proposed design achieves comparable estimations with fast inference performance compared to state-of-the-art methods. Moreover, we statistically proved that the architecture of the models has an impact on the energy footprint in terms of Watts required by the device during inference. The proposed architectures can therefore be considered a promising approach for real-time monocular depth estimation, offering the best trade-off between inference performance, estimation error, and energy consumption, with the aim of improving environment perception for underwater drones, lightweight robots, and the Internet of Things.

20 pages, 1009 KiB  
Article
On-Device IoT-Based Predictive Maintenance Analytics Model: Comparing TinyLSTM and TinyModel from Edge Impulse
by Irene Niyonambaza Mihigo, Marco Zennaro, Alfred Uwitonze, James Rwigema and Marcelo Rovai
Sensors 2022, 22(14), 5174; https://doi.org/10.3390/s22145174 - 11 Jul 2022
Cited by 13 | Viewed by 3059
Abstract
A precise prediction of the health status of industrial equipment is of significant importance to determine its reliability and lifespan. This prediction provides users information that is useful in determining when to service, repair, or replace the unhealthy equipment’s components. In recent decades, many works have been conducted on data-driven prognostic models to estimate an asset’s remaining useful life. These models require updates on novel happenings from regular diagnostics; otherwise, failure may happen before the estimated time due to different factors that may oblige rapid maintenance actions, including unexpected replacement. In addition to offline prognostic models, the continuous monitoring and prediction of remaining useful life can prevent failures, increase the useful lifespan through on-time maintenance actions, and reduce unnecessary preventive maintenance and associated costs. This paper presents the ability of two real-time tiny predictive analytics models, a tiny long short-term memory network (TinyLSTM) and a sequential dense neural network (DNN) from Edge Impulse (TinyModel), to predict the remaining useful life of equipment by considering the status of its different components. The equipment degradation insights were assessed through real-time data gathered from operating equipment. To label our dataset, fuzzy logic based on the maintainer’s expertise is used to generate maintenance priorities, which are later used to compute the actual remaining useful life. The predictive analytics models were developed and performed well, with an evaluation loss of 0.01 and 0.11, respectively, for the LSTM and the model from Edge Impulse. Both models were converted into TinyModels for on-device deployment. Unseen data were used to simulate the deployment of both TinyModels. According to the evaluation and deployment results, both TinyLSTM and the TinyModel from Edge Impulse are powerful in real-time predictive maintenance, but the model from Edge Impulse is much easier in terms of development, conversion to a Tiny version, and deployment.

17 pages, 520 KiB  
Article
Edge Caching Based on Collaborative Filtering for Heterogeneous ICN-IoT Applications
by Divya Gupta, Shalli Rani, Syed Hassan Ahmed, Sahil Verma, Muhammad Fazal Ijaz and Jana Shafi
Sensors 2021, 21(16), 5491; https://doi.org/10.3390/s21165491 - 15 Aug 2021
Cited by 39 | Viewed by 3610
Abstract
The substantial advancements offered by edge computing indicate serious evolutionary improvements for Internet of Things (IoT) technology. The rigid design philosophy of the traditional network architecture limits its scope to meet future demands. However, information-centric networking (ICN), mostly referred to as ICN-IoT in this context, is envisioned as a promising architecture to bridge the huge gaps and maintain IoT networks. The edge-enabled ICN-IoT architecture demands efficient in-network caching techniques to support better user quality of experience (QoE). In this paper, we propose an enhanced ICN-IoT content caching strategy enabling artificial intelligence (AI)-based collaborative filtering within the edge cloud to support heterogeneous IoT architectures. This collaborative filtering-based content caching strategy intelligently caches content on edge nodes for traffic management at cloud databases. Evaluations have been conducted to check the performance of the proposed strategy against various benchmark strategies, such as LCE, LCD, CL4M, and ProbCache. The analytical results demonstrate the better performance of our proposed strategy, with an average gain of 15% in cache hit ratio, a 12% reduction in content retrieval delay, and a 28% reduction in average hop count compared to the best-performing baseline, LCD. We believe the proposed strategy will contribute an effective solution to related studies in this domain.
