Advances in Hardware-Software Codesign

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: closed (31 October 2023) | Viewed by 24327

Special Issue Editors


Guest Editor
SATIE Laboratory CNRS Joint Research Unit, UMR 8029, Paris-Saclay University, 91190 Gif-sur-Yvette, France
Interests: embedded systems; hardware–software codesign; smart sensors; image processing; simultaneous localization and mapping

Guest Editor
SATIE Laboratory CNRS Joint Research Unit, UMR 8029, Paris-Saclay University, 91190 Gif-sur-Yvette, France
Interests: multisensor data fusion; mobile perception; intelligent transportation systems

Guest Editor
SATIE Laboratory CNRS Joint Research Unit, UMR 8029, Paris-Saclay University, 91190 Gif-sur-Yvette, France
Interests: embedded systems; multisensor perception; mobile perception; intelligent transportation systems

Special Issue Information

Dear Colleagues,

Designing perception, driving-assistance, data-analysis, or control systems on embedded hardware architectures means facing multiple constraints: sensor interfaces, computing power, timing requirements, and energy consumption. The growing complexity of applications in the field of embedded systems, particularly in a real-time context, calls for an algorithm-architecture mapping approach, i.e., hardware–software codesign.

This concept is based on a detailed study of the constraints, taking into account technological developments in hardware architectures and software tools. This approach, which may be formalized, lies in the art of breaking down an activity into functional blocks and distributing them as well as possible over a target architecture. Searching for optimality in hardware–software codesign leads to making choices about hardware architectures (CPUs, FPGAs, GPUs, DSPs, SoCs, etc.), sensors, instrumentation, topology, types of computation, and temporal consistency. The principle of this approach is to translate an algorithm at the behavioral level into a data graph whose parallelism is optimized for the ad hoc architecture. This graph is progressively modified to explore different options for implementing the algorithm, making it possible to guide choices in terms of resource allocation, data paths, sequencing, and data formats. This requires a study that explores the space of partitionings and computation placements on ad hoc hardware processing units, with the aim of finding the best architectural instantiation. The definition of a model is based on a set of software and algorithmic test vectors evaluated on hardware targets and on an analytical study of performance, following an approach that brings processing closer to the sensors. This methodology makes it possible to design systems that respect time constraints and minimize hardware resources in a given application area.
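The exploration loop described above — break an algorithm into functional blocks, map them onto processing units, evaluate, and iterate — can be sketched as an exhaustive search over a toy task graph. The task names, per-target cost figures, and transfer penalty below are hypothetical illustrations, not measurements from any real design:

```python
from itertools import product

# Toy task graph: each node carries an assumed cost (ms) per target.
cost = {
    "filter":  {"CPU": 4.0, "FPGA": 0.5},
    "feature": {"CPU": 6.0, "FPGA": 1.0},
    "match":   {"CPU": 2.0, "FPGA": 3.0},
}
edges = [("filter", "feature"), ("feature", "match")]
TRANSFER = 1.5  # assumed penalty (ms) when a data edge crosses the CPU/FPGA boundary

def latency(mapping):
    """Total cost of a mapping: node execution plus cross-boundary transfers."""
    t = sum(cost[n][mapping[n]] for n in cost)
    t += sum(TRANSFER for a, b in edges if mapping[a] != mapping[b])
    return t

# Exhaustively explore the partitioning space and keep the best instantiation.
nodes = list(cost)
best = min(
    (dict(zip(nodes, m)) for m in product(("CPU", "FPGA"), repeat=len(nodes))),
    key=latency,
)
print(best, latency(best))
```

Real codesign flows replace the exhaustive search with heuristics and add energy and resource terms to the objective, but the structure — candidate mapping, cost model, selection — is the same.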

The aim of this Special Issue is to focus on advanced hardware–software codesign methods for signal, video, or image processing, giving a broad view of this complex field by covering topics that include, but are not limited to, the following areas:

  • Embedded systems;
  • Hardware architectures for signal and image processing: manycore processors, FPGAs, DSPs, GPUs, SoCs;
  • Advances in algorithm-architecture mapping;
  • Hardware–software synthesis, computer-aided tools (compilers, hardware synthesis);
  • Reconfigurable systems;
  • Real-time processing;
  • Vision systems;
  • Smart sensors;
  • Sensor data fusion;
  • Systems for robotics and automation;
  • Embedded artificial intelligence;
  • Applications to simultaneous localization and mapping (SLAM), autonomous vehicles, UAVs, medical, biomedical, smart agriculture, smart cities.

Dr. Abdelhafid El Ouardi
Dr. Sergio Rodriguez
Dr. Bastien Vincke
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • hardware–software codesign
  • algorithm-architecture mapping
  • system modeling and partitioning
  • real-time processing
  • software optimization and performance evaluation
  • smart sensors
  • embedded systems

Published Papers (4 papers)


Research

16 pages, 2370 KiB  
Article
Dual-Core PLC for Cooperating Projects with Software Implementation
by Marcin Hubacz and Bartosz Trybus
Electronics 2023, 12(23), 4730; https://doi.org/10.3390/electronics12234730 - 22 Nov 2023
Viewed by 824
Abstract
Development of a general-purpose PLC based on a typical dual-core processor as a hardware platform is presented. The cores run two cooperating projects involving data exchange through shared memory. Such a solution is equivalent to a single-core PLC running two tasks by means of a real-time operating system. Upgrading a typical programming tool involves defining which of the global variables are shared, and whether a variable in a particular core is read from or written to the shared memory. Extensions to core runtimes consist of a read from shared memory at the beginning of the scan cycle and a write at the end, and of an algorithm for protecting the shared memory against access conflicts. As an example, the proposed solution is implemented in an engineering tool with runtime based on a virtual machine concept. The PLC prototype is based on a heterogeneous ARM dual-core STM32 microcontroller running different projects. The innovation in the research lies in showing how to run two projects in a dual-core PLC without using an operating system. Extension to multiple projects for a multi-core processor can be accomplished in a similar manner.
(This article belongs to the Special Issue Advances in Hardware-Software Codesign)
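The scan-cycle extension described in the abstract — read shared variables at the start of the cycle, run the project logic, write back at the end, under access protection — can be imitated with ordinary threads. This is a sketch under assumed names (`shared`, `scan_cycle`, the two project logics), not the authors' runtime:

```python
import threading

# Hypothetical shared-memory area; the lock stands in for the
# access-protection algorithm mentioned in the abstract.
shared = {"setpoint": 10, "feedback": 0}
lock = threading.Lock()

def scan_cycle(reads, writes, logic, cycles):
    """One core's runtime: copy in, execute, copy out, repeat."""
    local = {}
    for _ in range(cycles):
        with lock:                       # read from shared memory at scan start
            for name in reads:
                local[name] = shared[name]
        logic(local)                     # this core's project logic
        with lock:                       # write to shared memory at scan end
            for name in writes:
                shared[name] = local[name]

# Core 0 project mirrors the setpoint; core 1 project ramps it up.
def core0_logic(local):
    local["feedback"] = local["setpoint"]

def core1_logic(local):
    local["setpoint"] += 1

t0 = threading.Thread(target=scan_cycle,
                      args=(["setpoint"], ["feedback"], core0_logic, 100))
t1 = threading.Thread(target=scan_cycle,
                      args=(["setpoint"], ["setpoint"], core1_logic, 100))
t0.start(); t1.start(); t0.join(); t1.join()
print(shared)   # setpoint has been incremented 100 times by core 1
```

Because each project only touches shared memory at the cycle boundaries, the two "cores" never observe a half-updated variable — the property the paper's access-conflict protection provides in hardware.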

19 pages, 5341 KiB  
Article
Performance Evaluation of C/C++, MicroPython, Rust and TinyGo Programming Languages on ESP32 Microcontroller
by Ignas Plauska, Agnius Liutkevičius and Audronė Janavičiūtė
Electronics 2023, 12(1), 143; https://doi.org/10.3390/electronics12010143 - 28 Dec 2022
Cited by 10 | Viewed by 19100
Abstract
The rapid growth of the Internet of Things (IoT) and its applications requires high computational efficiency, low-cost, and low-power solutions for various IoT devices. These include a wide range of microcontrollers that are used to collect, process, and transmit IoT data. ESP32 is a microcontroller with built-in wireless connectivity, suitable for various IoT applications. The ESP32 chip is gaining more popularity, both in academia and in the developer community, supported by a number of software libraries and programming languages. While low- and middle-level languages, such as C/C++ and Rust, are believed to be the most efficient, TinyGo and MicroPython are more developer-friendly low-complexity languages, suitable for beginners and allowing more rapid coding. This paper evaluates the efficiency of the available ESP32 programming languages, namely C/C++, MicroPython, Rust, and TinyGo, by comparing their execution performance. Several popular data and signal processing algorithms were implemented in these languages, and their execution times were compared: Fast Fourier Transform (FFT), Cyclic Redundancy Check (CRC), Secure Hash Algorithm (SHA), Infinite Impulse Response (IIR), and Finite Impulse Response (FIR) filters. The results show that the C/C++ implementations were fastest in most cases, closely followed by TinyGo and Rust, while MicroPython programs were many times slower than implementations in other programming languages. Therefore, the C/C++, TinyGo, and Rust languages are more suitable when execution and response time are the key factors, while Python can be used for less strict system requirements, enabling a faster and less complicated development process.
(This article belongs to the Special Issue Advances in Hardware-Software Codesign)
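One of the benchmarked kernels, an FIR filter, illustrates the style of workload the comparison rests on. The harness below is a hypothetical Python rendition — the signal, tap count, and sizes are invented; the paper's method, in spirit, is to port the same kernel to C/C++, Rust, TinyGo, and MicroPython and time each port on the ESP32:

```python
import time

def fir(signal, coeffs):
    """Finite Impulse Response filter: y[n] = sum_k h[k] * x[n-k]."""
    taps = len(coeffs)
    return [
        sum(coeffs[k] * signal[n - k] for k in range(taps))
        for n in range(taps - 1, len(signal))
    ]

# Synthetic input and a simple 4-tap moving-average kernel (illustrative only).
signal = [float(i % 16) for i in range(10_000)]
coeffs = [0.25, 0.25, 0.25, 0.25]

t0 = time.perf_counter()
out = fir(signal, coeffs)
elapsed = time.perf_counter() - t0
print(f"{len(out)} samples filtered in {elapsed * 1e3:.1f} ms")
```

Timing the same loop across language ports, rather than micro-benchmarking individual operations, captures interpreter and runtime overheads — which is where MicroPython's large gap in the reported results comes from.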

20 pages, 6333 KiB  
Article
Rider in the Loop Dynamic Motorcycle Simulator: An Instrumentation Strategy Focused on Human Acceptability
by Pauline Michel, Samir Bouaziz, Flavien Delgehier and Stéphane Espié
Electronics 2022, 11(17), 2690; https://doi.org/10.3390/electronics11172690 - 27 Aug 2022
Viewed by 1605
Abstract
Human-in-the-loop driving simulation aims to create the illusion of driving by stimulating the driver’s sensory systems in as realistic conditions as possible. However, driving simulators can only produce a subset of the sensory stimuli that would be available in a real driving situation, depending on the degree of refinement of their design. This subset must be carefully chosen because it is crucial for human acceptability. Our focus is the design of a physical dynamic (i.e., motion-based) motorcycle-riding simulator. For its instrumentation, we focused on the rider acceptability of all sub-systems and the simulator as a whole. The significance of our work lies in this particular approach; the acceptability of the riding illusion for the rider is critical for the validity of any results acquired using a simulator. In this article, we detail the design of the hardware/software architecture of our simulator under this constraint; sensors, actuators, and dataflows allow us to (1) capture the rider’s actions in real-time; (2) render the motorcycle’s behavior to the rider; and (3) measure and study rider/simulated motorcycle interactions. We believe our methodology could be adopted by future designers of motorcycle-riding simulators and other human-in-the-loop simulators to improve their rendering (including motion) quality and acceptability.
(This article belongs to the Special Issue Advances in Hardware-Software Codesign)
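The three-step cycle the abstract enumerates — capture rider actions, advance the simulated motorcycle, log the interaction — can be sketched as a fixed-timestep loop. The longitudinal model, gains, and input schedule below are invented for illustration and have nothing to do with the paper's actual dynamics:

```python
def step(state, throttle, brake, dt):
    """Advance a toy longitudinal model by one timestep (made-up gains)."""
    accel = 3.0 * throttle - 5.0 * brake
    speed = max(0.0, state["speed"] + accel * dt)
    return {"speed": speed}

state = {"speed": 0.0}
log = []                                  # rider/model interaction record
for t in range(10):                       # 10 cycles at dt = 0.01 s
    # (1) capture rider actions: accelerate for 5 cycles, then brake
    throttle, brake = (1.0, 0.0) if t < 5 else (0.0, 1.0)
    # (2) advance the simulated motorcycle
    state = step(state, throttle, brake, 0.01)
    # (3) log the interaction for later study
    log.append((t, throttle, brake, state["speed"]))
print(log[-1])
```

In the real simulator each cycle additionally drives the motion platform and other actuators, and the loop must meet a hard real-time deadline so that rendered motion stays acceptable to the rider.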

17 pages, 2614 KiB  
Article
Design Framework for ReRAM-Based DNN Accelerators with Accuracy and Hardware Evaluation
by Hsu-Yu Kao, Shih-Hsu Huang and Wei-Kai Cheng
Electronics 2022, 11(13), 2107; https://doi.org/10.3390/electronics11132107 - 5 Jul 2022
Cited by 1 | Viewed by 1575
Abstract
To achieve faster design closure, there is a need to provide a design framework for the design of ReRAM-based DNN (deep neural network) accelerator at the early design stage. In this paper, we develop a high-level ReRAM-based DNN accelerator design framework. The proposed design framework has the following three features. First, we consider ReRAM’s non-linear properties, including lognormal distribution, leakage current, IR drop, and sneak path. Thus, model accuracy and circuit performance can be accurately evaluated. Second, we use SystemC with TLM modeling method to build our virtual platform. To our knowledge, the proposed design framework is the first behavior-level ReRAM deep learning accelerator simulator that can simulate real hardware behavior. Third, the proposed design framework can evaluate not only model accuracy but also hardware cost. As a result, the proposed design framework can be used for behavior-level design space exploration. In the experiments, we have deployed different DNN models on the virtual platform. Circuit performance can be easily evaluated on the proposed design framework. Furthermore, experiment results also show that the noise effects are different in different ReRAM array architectures. Based on the proposed design framework, we can easily mitigate noise effects by tuning architecture parameters.
(This article belongs to the Special Issue Advances in Hardware-Software Codesign)
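The effect of one of the non-linear properties the framework models, lognormal variation, can be illustrated on a single dot product — the core operation of a ReRAM crossbar. The weights, inputs, and multiplicative noise model below are hypothetical stand-ins, not the paper's device model:

```python
import random

random.seed(0)

def noisy_dot(weights, x, sigma):
    """Dot product with multiplicative lognormal deviation on each weight,
    a simplified stand-in for ReRAM conductance variation."""
    return sum(w * random.lognormvariate(0.0, sigma) * xi
               for w, xi in zip(weights, x))

weights = [0.5, -0.25, 1.0, 0.75]
x = [1.0, 2.0, -1.0, 0.5]
ideal = sum(w * xi for w, xi in zip(weights, x))

# With sigma = 0 the lognormal factor is exactly 1, so the result is ideal;
# larger sigma perturbs each stored weight and the output drifts.
for sigma in (0.0, 0.1, 0.3):
    err = abs(noisy_dot(weights, x, sigma) - ideal)
    print(f"sigma={sigma}: |error|={err:.4f}")
```

Sweeping such a noise parameter per layer, together with leakage, IR drop, and sneak-path terms, is the kind of accuracy-versus-architecture exploration the framework automates at the behavior level.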
