The Rise of EdgeAI and TinyML for the Next-Generation IoT

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Internet of Things".

Deadline for manuscript submissions: closed (15 August 2023)

Special Issue Editors


Guest Editor
1. Fondazione Bruno Kessler, Via Sommarive 18, 38123 Trento, Italy
2. Faculty of Engineering, eCampus University, Via Isimbardi 10, 22060 Novedrate, Italy
Interests: computational intelligence; soft-computing techniques; Internet of Things; power-aware engineering design; embedded systems

Guest Editor
Fondazione Bruno Kessler, 38123 Trento, Italy
Interests: edge intelligence; edge computing; IoT; machine learning

Guest Editor
Eurotech Group, 33020 Amaro, UD, Italy
Interests: hardware formal verification; hardware/software co-design and co-simulation; advanced hardware architectures; IoT; edge computing

Special Issue Information

Dear Colleagues,

Edge computing is becoming a widely adopted technological paradigm. It is especially relevant in application domains such as smart industry, smart cities, and smart ports and, more generally, in the design of robust and reliable applications where bandwidth, privacy, latency, and responsiveness impose strong constraints. The original idea of edge computing is to push the execution of cloud-scale functionalities to the edge of the network, close to where data are produced. This shift has required substantial re-engineering efforts, especially concerning how applications are designed, developed, packaged, and deployed, given the new challenges imposed by diverse and usually constrained execution environments (e.g., embedded PCs, PLCs, single-board computers, microcontrollers). At the same time, the hype surrounding artificial intelligence (AI) has strongly influenced how its models and tools are conceived or adapted to fit the small capacities of far-edge/IoT devices. At this intersection, two exciting new research and innovation directions are gaining momentum, namely EdgeAI and TinyML, raising new challenges regarding how data are collected and processed, and how AI model architectures are designed, adapted, optimized, deployed, updated, trained, and executed along the cloud-to-thing continuum.

Topics relevant to this Special Issue include, but are not limited to, the following:

  • EdgeAI and TinyML in practical applications;
  • Edge computing architectures supporting AI deployment;
  • Methods for the optimization of models for AI at the edge/far-edge of the network;
  • Methods to adapt cloud-scale AI models for the edge;
  • Training of AI models at the edge;
  • Transfer learning of AI models at the edge of the network;
  • Novel training approaches for AI models at the edge;
  • Distributed and/or decentralized orchestration of AI pipelines along the cloud-to-thing continuum;
  • Federated learning approaches with tiny IoT devices.

Prof. Dr. Massimo Vecchio
Dr. Mattia Antonini
Dr. Miguel Pincheira
Dr. Panagiotis Trakadas
Dr. Paolo Azzoni
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Internet of Things
  • embedded intelligence
  • deep learning
  • MLOps
  • TinyMLOps
  • self-supervised learning
  • far edge
  • cloud-to-thing continuum

Published Papers (1 paper)


Research

17 pages, 468 KiB  
Article
AoCStream: All-on-Chip CNN Accelerator with Stream-Based Line-Buffer Architecture and Accelerator-Aware Pruning
by Hyeong-Ju Kang and Byung-Do Yang
Sensors 2023, 23(19), 8104; https://doi.org/10.3390/s23198104 - 27 Sep 2023
Abstract
Convolutional neural networks (CNNs) play a crucial role in many EdgeAI and TinyML applications, but their implementation usually requires external memory, which degrades the feasibility of such resource-hungry environments. To solve this problem, this paper proposes memory-reduction methods at the algorithm and architecture level, implementing a reasonable-performance CNN with the on-chip memory of a practical device. At the algorithm level, accelerator-aware pruning is adopted to reduce the weight memory amount. For activation memory reduction, a stream-based line-buffer architecture is proposed. In the proposed architecture, each layer is implemented by a dedicated block, and the layer blocks operate in a pipelined way. Each block has a line buffer to store a few rows of input data instead of a frame buffer to store the whole feature map, reducing intermediate data-storage size. The experimental results show that the object-detection CNNs of MobileNetV1/V2 and an SSDLite variant, widely used in TinyML applications, can be implemented even on a low-end FPGA without external memory.
(This article belongs to the Special Issue The Rise of EdgeAI and TinyML for the Next-Generation IoT)
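The line-buffer idea described in the abstract can be illustrated with a minimal software analogy: instead of holding the entire input feature map in memory, a layer keeps only as many rows as its kernel height and emits output rows as input rows stream in. The sketch below is an assumption-laden illustration in Python (a 3×3 valid convolution over a row stream), not the paper's hardware implementation; the function name and interface are hypothetical.

```python
from collections import deque

def conv3x3_rows(row_stream, width, kernel):
    """Apply a 3x3 valid convolution to an image arriving row by row,
    keeping only a 3-row line buffer instead of the whole frame."""
    line_buffer = deque(maxlen=3)  # holds just 3 rows, not the full feature map
    for row in row_stream:
        line_buffer.append(row)  # oldest row is evicted automatically
        if len(line_buffer) == 3:
            out_row = []
            for x in range(width - 2):  # valid (no-padding) output positions
                acc = 0
                for ky in range(3):
                    for kx in range(3):
                        acc += line_buffer[ky][x + kx] * kernel[ky][kx]
                out_row.append(acc)
            yield out_row  # one output row per new input row, once warmed up
```

In a pipelined accelerator, each layer block would run such a loop concurrently, with peak activation storage bounded by kernel height times row width rather than by the full feature-map size.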