
Feature Papers in Intelligent Sensors 2024

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: 31 December 2024 | Viewed by 3908

Special Issue Editors


Guest Editor
1. COSYS Department for Automated Vehicles Research, Université Gustave Eiffel, 25 Allée des Marronnier, 78000 Versailles, France
2. PICS-L Lab, COSYS Department, Université Gustave Eiffel, 25 Allée des Marronnier, 78000 Versailles, France
3. The International Associated Lab ICCAM (France-Australia), Université Gustave Eiffel, 25 Allée des Marronnier, 78000 Versailles, France
Interests: automated driving; multisensor data fusion; cooperative systems; environment perception; extended perception; sensors simulation for ADAS prototyping

Special Issue Information

Dear Colleagues,

We are pleased to announce that Intelligent Sensors is now compiling a collection of papers submitted by the Editorial Board Members (EBMs) of our section and outstanding scholars in this research field. We welcome contributions and recommendations from EBMs.

The aim of this Special Issue is to publish a set of papers that showcase the most insightful and influential original articles or reviews where our section’s EBMs discuss key topics in the field. We expect these papers to be widely read and highly influential within the field. All papers in this Special Issue will be collected into a printed edition book after the deadline and will be carefully promoted.

We would also like to take this opportunity to call on more scholars to join Intelligent Sensors so that we can work together to further develop this exciting field of research. Potential topics include, but are not limited to, the following:

  • Sensor signal processing;
  • Deep learning/machine learning;
  • Data processing/science;
  • Computer vision;
  • Integrated circuits;
  • Human–robot/machine/computer interaction;
  • Artificial intelligence;
  • Intelligent instrumentation;
  • Intelligent control;
  • Intelligent portable platforms;
  • Intelligent computing;
  • Wireless sensor networks (WSNs);
  • Smart sensor networks;
  • Intelligent environmental monitoring;
  • Smart cities;
  • Smart home/home automation;
  • Smart manufacturing and industry;
  • Smart energy management/smart grids;
  • Smart agriculture;
  • Smart health monitoring;
  • E-health/M-health;
  • Intelligent emotion recognition;
  • Smart building/smart civil infrastructure;
  • Smart/precision farming;
  • Blockchain;
  • 5G/6G.

Prof. Dr. Antonio Fernández-Caballero
Dr. Dominique Gruyer
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • smart sensors
  • intelligent sensors
  • smart sensing
  • intelligent sensing
  • sensor data
  • sensor signal
  • artificial intelligence
  • computer vision

Published Papers (8 papers)


Research


21 pages, 10099 KiB  
Article
MYFix: Automated Fixation Annotation of Eye-Tracking Videos
by Negar Alinaghi, Samuel Hollendonner and Ioannis Giannopoulos
Sensors 2024, 24(9), 2666; https://doi.org/10.3390/s24092666 - 23 Apr 2024
Viewed by 209
Abstract
In mobile eye-tracking research, the automatic annotation of fixation points is an important yet difficult task, especially in varied and dynamic environments such as outdoor urban landscapes. This complexity is increased by the constant movement and dynamic nature of both the observer and their environment in urban spaces. This paper presents a novel approach that integrates the capabilities of two foundation models, YOLOv8 and Mask2Former, as a pipeline to automatically annotate fixation points without requiring additional training or fine-tuning. Our pipeline leverages YOLO’s extensive training on the MS COCO dataset for object detection and Mask2Former’s training on the Cityscapes dataset for semantic segmentation. This integration not only streamlines the annotation process but also improves accuracy and consistency, ensuring reliable annotations, even in complex scenes with multiple objects side by side or at different depths. Validation through two experiments showcases its efficiency, achieving 89.05% accuracy in a controlled data collection and 81.50% accuracy in a real-world outdoor wayfinding scenario. With an average runtime per frame of 1.61 ± 0.35 s, our approach stands as a robust solution for automatic fixation annotation. Full article
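As a rough illustration of the fusion step described above, the sketch below labels a fixation point with the detected object whose bounding box contains it, falling back to a semantic-segmentation lookup otherwise. The box format, class names, and the smallest-containing-box heuristic are assumptions made for this sketch, not the paper's actual YOLOv8/Mask2Former interface.

```python
# Minimal sketch of fixation annotation by fusing object detections with a
# semantic-segmentation fallback. All data structures here are illustrative.

def annotate_fixation(fix_xy, detections, seg_lookup):
    """detections: list of (label, (x1, y1, x2, y2)); seg_lookup: (x, y) -> class."""
    x, y = fix_xy
    # Prefer the smallest containing box, which tends to be the nearest object.
    containing = [(label, (x2 - x1) * (y2 - y1))
                  for label, (x1, y1, x2, y2) in detections
                  if x1 <= x <= x2 and y1 <= y <= y2]
    if containing:
        return min(containing, key=lambda t: t[1])[0]
    return seg_lookup(x, y)

# Example: one fixation inside two nested detections, one on the background.
dets = [("car", (10, 10, 60, 40)), ("person", (20, 15, 30, 35))]
seg = lambda x, y: "road"
print(annotate_fixation((25, 20), dets, seg))   # → person (smallest containing box)
print(annotate_fixation((80, 80), dets, seg))   # → road (segmentation fallback)
```

Resolving overlapping detections by box area is one simple way to handle the "multiple objects side by side or at different depths" case the abstract mentions.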
(This article belongs to the Special Issue Feature Papers in Intelligent Sensors 2024)

15 pages, 1575 KiB  
Article
Cluster-Based Pairwise Contrastive Loss for Noise-Robust Speech Recognition
by Geon Woo Lee and Hong Kook Kim
Sensors 2024, 24(8), 2573; https://doi.org/10.3390/s24082573 - 17 Apr 2024
Viewed by 278
Abstract
This paper addresses a joint training approach applied to a pipeline comprising speech enhancement (SE) and automatic speech recognition (ASR) models, where an acoustic tokenizer is included in the pipeline to leverage the linguistic information from the ASR model to the SE model. The acoustic tokenizer takes the outputs of the ASR encoder and provides a pseudo-label through K-means clustering. To transfer the linguistic information, represented by pseudo-labels, from the acoustic tokenizer to the SE model, a cluster-based pairwise contrastive (CBPC) loss function is proposed, which is a self-supervised contrastive loss function, and combined with an information noise contrastive estimation (infoNCE) loss function. This combined loss function prevents the SE model from overfitting to outlier samples and represents the pronunciation variability in samples with the same pseudo-label. The effectiveness of the proposed CBPC loss function is evaluated on a noisy LibriSpeech dataset by measuring both the speech quality scores and the word error rate (WER). The experimental results reveal that the proposed joint training approach using the described CBPC loss function achieves a lower WER than the conventional joint training approaches. In addition, it is demonstrated that the speech quality scores of the SE model trained using the proposed training approach are higher than those of the standalone-SE model and SE models trained using conventional joint training approaches. An ablation study is also conducted to investigate the effects of different combinations of loss functions on the speech quality scores and WER. Here, it is revealed that the proposed CBPC loss function combined with infoNCE contributes to a reduced WER and an increase in most of the speech quality scores. Full article
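The pseudo-labeling and contrastive steps can be sketched as follows, assuming frame-level embeddings as plain NumPy arrays. The tiny K-means and the same-cluster-positive infoNCE variant below are simplified stand-ins for the paper's acoustic tokenizer and CBPC loss, not its implementation.

```python
import numpy as np

def kmeans_labels(x, k=2, iters=20):
    """Tiny K-means stand-in for the acoustic tokenizer: clusters encoder
    outputs into pseudo-labels (farthest-point init, deterministic)."""
    centers = [x[0]]
    for _ in range(k - 1):
        d = np.min([((x - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(x[d.argmax()])
    centers = np.array(centers)
    for _ in range(iters):
        labels = ((x[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = x[labels == j].mean(0)
    return labels

def cluster_contrastive_loss(emb, labels, temperature=0.1):
    """InfoNCE-style loss in which frames sharing a pseudo-label are the
    positive pairs (a simplified reading of the CBPC idea)."""
    z = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sim = z @ z.T / temperature
    np.fill_diagonal(sim, -np.inf)                  # exclude self-pairs
    logits = sim - sim.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    per_anchor = [-log_prob[i, (labels == y) & (np.arange(len(labels)) != i)].mean()
                  for i, y in enumerate(labels)
                  if (labels == y).sum() > 1]       # skip singleton clusters
    return float(np.mean(per_anchor))
```

With well-separated clusters, embeddings that agree with their pseudo-labels yield a lower loss than mismatched labels, which is the signal the joint training exploits.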
(This article belongs to the Special Issue Feature Papers in Intelligent Sensors 2024)

22 pages, 5116 KiB  
Article
A Comprehensive Exploration of Fidelity Quantification in Computer-Generated Images
by Alexandra Duminil, Sio-Song Ieng and Dominique Gruyer
Sensors 2024, 24(8), 2463; https://doi.org/10.3390/s24082463 - 11 Apr 2024
Viewed by 237
Abstract
Generating realistic road scenes is crucial for advanced driving systems, particularly for training deep learning methods and validation. Numerous efforts aim to create larger and more realistic synthetic datasets using graphics engines or synthetic-to-real domain adaptation algorithms. In the realm of computer-generated images (CGIs), assessing fidelity is challenging and involves both objective and subjective aspects. Our study adopts a comprehensive conceptual framework to quantify the fidelity of RGB images, unlike existing methods, which are predominantly application-specific, probably owing to the data complexity and the huge range of possible situations and conditions encountered. In this paper, a set of distinct metrics assessing the level of fidelity of virtual RGB images is proposed. For quantifying image fidelity, we analyze both local and global perspectives of texture and the high-frequency information in images. Our focus is on the statistical characteristics of realistic and synthetic road datasets, using over 28,000 images from at least 10 datasets. Through a thorough examination, we aim to reveal insights into texture patterns and high-frequency components contributing to the objective perception of data realism in road scenes. This study, exploring image fidelity in both virtual and real conditions, takes the perspective of an embedded camera rather than the human eye. The results of this work, including a pioneering set of objective scores applied to real, virtual, and improved virtual data, offer crucial insights and are an asset for the scientific community in quantifying fidelity levels. Full article
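One simple instance of the high-frequency analysis mentioned above is a radial spectral-energy ratio: rendered images often lack the fine-grained texture of camera images, so a lower share of high-frequency power can indicate a smoother, more synthetic image. The cutoff value and toy images below are illustrative assumptions, not the paper's actual metrics.

```python
import numpy as np

def high_freq_ratio(gray, cutoff=0.25):
    """Share of 2-D spectral power above a normalized radial frequency cutoff."""
    h, w = gray.shape
    power = np.abs(np.fft.fftshift(np.fft.fft2(gray))) ** 2
    yy, xx = np.indices((h, w))
    r = np.hypot((yy - h / 2) / (h / 2), (xx - w / 2) / (w / 2))
    return float(power[r > cutoff].sum() / power.sum())

rng = np.random.default_rng(0)
smooth = np.linspace(0, 1, 64)[None, :] * np.ones((64, 1))  # flat gradient, "synthetic"
noisy = smooth + 0.2 * rng.standard_normal((64, 64))        # camera-like grain
```

Here `high_freq_ratio(noisy)` exceeds `high_freq_ratio(smooth)`, the kind of objective separation between textured and overly smooth data such a metric is meant to expose.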
(This article belongs to the Special Issue Feature Papers in Intelligent Sensors 2024)

24 pages, 6840 KiB  
Article
Facial Expression Recognition for Measuring Jurors’ Attention in Acoustic Jury Tests
by Reza Jamali, Andrea Generosi, Josè Yuri Villafan, Maura Mengoni, Leonardo Pelagalli, Gianmarco Battista, Milena Martarelli, Paolo Chiariotti, Silvia Angela Mansi, Marco Arnesano and Paolo Castellini
Sensors 2024, 24(7), 2298; https://doi.org/10.3390/s24072298 - 4 Apr 2024
Viewed by 338
Abstract
The perception of sound greatly impacts users’ emotional states, expectations, affective relationships with products, and purchase decisions. Consequently, assessing the perceived quality of sounds through jury testing is crucial in product design. However, the subjective nature of jurors’ responses may limit the accuracy and reliability of jury test outcomes. This research explores the utility of facial expression analysis in jury testing to enhance response reliability and mitigate subjectivity. Some quantitative indicators allow the research hypothesis to be validated, such as the correlation between jurors’ emotional responses and valence values, the accuracy of jury tests, and the disparities between jurors’ questionnaire responses and the emotions measured by FER (facial expression recognition). Specifically, analysis of attention levels during different statuses reveals a discernible decrease in attention levels, with 70 percent of jurors exhibiting reduced attention levels in the ‘distracted’ state and 62 percent in the ‘heavy-eyed’ state. On the other hand, regression analysis shows that the correlation between jurors’ valence and their choices in the jury test increases when considering the data where the jurors are attentive. The correlation highlights the potential of facial expression analysis as a reliable tool for assessing juror engagement. The findings suggest that integrating facial expression recognition can enhance the accuracy of jury testing in product design by providing a more dependable assessment of user responses and deeper insights into participants’ reactions to auditory stimuli. Full article
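The attention-filtered correlation analysis described above can be illustrated on synthetic data: if valence tracks the juror's choice only while the juror is attentive, restricting the correlation to attentive frames raises it. The generative assumptions below are invented for this sketch.

```python
import numpy as np

def pearson(x, y):
    """Plain Pearson correlation coefficient."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    x, y = x - x.mean(), y - y.mean()
    return float((x * y).sum() / np.sqrt((x ** 2).sum() * (y ** 2).sum()))

# Synthetic illustration: valence reflects the choice only on attentive frames;
# on inattentive frames it is uninformative noise.
rng = np.random.default_rng(0)
choice = rng.integers(0, 2, 200).astype(float)
attentive = rng.random(200) < 0.7
valence = np.where(attentive, 2 * choice - 1, 0.0) + 0.3 * rng.standard_normal(200)

r_all = pearson(valence, choice)
r_attentive = pearson(valence[attentive], choice[attentive])
```

Under these assumptions `r_attentive` exceeds `r_all`, mirroring the abstract's finding that the valence-choice correlation increases when only attentive data are considered.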
(This article belongs to the Special Issue Feature Papers in Intelligent Sensors 2024)

14 pages, 4684 KiB  
Article
Passive Electroluminescence and Photoluminescence Imaging Acquisition of Photovoltaic Modules
by Alberto Redondo-Plaza, José Ignacio Morales-Aragonés, Sara Gallardo-Saavedra, Héctor Felipe Mateo-Romero, Santiago Araujo-Rendón, Ángel L. Zorita-Lamadrid, Víctor Alonso-Gómez and Luis Hernández-Callejo
Sensors 2024, 24(5), 1539; https://doi.org/10.3390/s24051539 - 28 Feb 2024
Viewed by 555
Abstract
In photovoltaic power plant inspections, techniques for module assessment play a crucial role as they enhance fault detection and module characterization. One valuable technique is luminescence. The present paper introduces a novel technique termed passive luminescence. It enhances both electroluminescence and photoluminescence imaging acquisition in photovoltaic power plants under normal operation in high irradiance conditions. This technique is based on the development of an electronic board, which allows the polarity of the module to be changed, enabling the current generated by the photovoltaic string to be injected into the module and producing electroluminescence effects. Additionally, the board can bypass the module and set an open circuit, inducing photoluminescence emission using sunlight as an excitation source. The proper coordination of the board and an InGaAs camera with a bandpass filter has allowed for the integration of a lock-in technique, which has produced electroluminescence and photoluminescence pictures that can be used for fault detection. Full article
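The lock-in technique mentioned above can be sketched in one dimension: multiplying the measured signal by quadrature references at the modulation frequency and averaging rejects unmodulated background light, drift, and noise, leaving the weak luminescence amplitude. All signal parameters below are illustrative, not measured values from the paper.

```python
import numpy as np

def lock_in_amplitude(signal, f_mod, fs):
    """Recover the amplitude of the component at f_mod by synchronous
    demodulation: project onto quadrature references and average."""
    t = np.arange(len(signal)) / fs
    i = (signal * np.cos(2 * np.pi * f_mod * t)).mean()
    q = (signal * np.sin(2 * np.pi * f_mod * t)).mean()
    return 2 * float(np.hypot(i, q))

# Simulated luminescence trace: a weak modulated emission buried in strong
# background light, slow drift, and sensor noise.
fs, f_mod, n = 1000.0, 37.0, 10000
t = np.arange(n) / fs
rng = np.random.default_rng(0)
signal = (0.05 * np.cos(2 * np.pi * f_mod * t + 0.8)   # modulated emission
          + 1.0 + 0.1 * t / t[-1]                      # background + drift
          + 0.5 * rng.standard_normal(n))              # sensor noise

amp = lock_in_amplitude(signal, f_mod, fs)             # close to 0.05
```

The same principle, applied per pixel to frames synchronized with the modulation, is what makes luminescence imaging workable under high irradiance.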
(This article belongs to the Special Issue Feature Papers in Intelligent Sensors 2024)

36 pages, 2331 KiB  
Article
Passive Infrared Sensor-Based Occupancy Monitoring in Smart Buildings: A Review of Methodologies and Machine Learning Approaches
by Azad Shokrollahi, Jan A. Persson, Reza Malekian, Arezoo Sarkheyli-Hägele and Fredrik Karlsson
Sensors 2024, 24(5), 1533; https://doi.org/10.3390/s24051533 - 27 Feb 2024
Viewed by 879
Abstract
Buildings are rapidly becoming more digitized, largely due to developments in the internet of things (IoT). This provides both opportunities and challenges. One of the central challenges in the process of digitizing buildings is the ability to monitor these buildings’ status effectively. This monitoring is essential for services that rely on information about the presence and activities of individuals within different areas of these buildings. Occupancy information (including people counting, occupancy detection, location tracking, and activity detection) plays a vital role in the management of smart buildings. In this article, we primarily focus on the use of passive infrared (PIR) sensors for gathering occupancy information. PIR sensors are among the most widely used sensors for this purpose due to their consideration of privacy concerns, cost-effectiveness, and low processing complexity compared to other sensors. Despite numerous literature reviews in the field of occupancy information, there is currently no literature review dedicated to occupancy information derived specifically from PIR sensors. Therefore, this review analyzes articles that specifically explore the application of PIR sensors for obtaining occupancy information. It provides a comprehensive literature review of PIR sensor technology from 2015 to 2023, focusing on applications in people counting, activity detection, and localization (tracking and location). It consolidates findings from articles that have explored and enhanced the capabilities of PIR sensors in these interconnected domains. This review thoroughly examines the application of various techniques, machine learning algorithms, and configurations for PIR sensors in indoor building environments, emphasizing not only the data processing aspects but also their advantages, limitations, and efficacy in producing accurate occupancy information. 
These developments are crucial for improving building management systems in terms of energy efficiency, security, and user comfort, among other operational aspects. The article seeks to offer a thorough analysis of the present state and potential future advancements of PIR sensor technology in efficiently monitoring and understanding occupancy information by classifying and analyzing improvements in these domains. Full article
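A minimal sketch of how a binary PIR trigger stream might be turned into occupancy information: since PIR sensors fire on motion rather than on a still occupant, a common heuristic holds the "occupied" state for some time after each trigger. This is a generic illustration, not a specific method from the reviewed literature.

```python
def occupancy_from_pir(events, hold=3):
    """Debounce a binary PIR trigger stream into an occupancy estimate:
    a zone is considered occupied from each trigger until `hold`
    trigger-free samples have elapsed."""
    countdown, out = 0, []
    for fired in events:
        if fired:
            countdown = hold
        out.append(countdown > 0)
        countdown = max(countdown - 1, 0)
    return out

print(occupancy_from_pir([1, 0, 0, 0, 0, 1, 0], hold=3))
# → [True, True, True, False, False, True, True]
```

The machine learning approaches surveyed in the review refine exactly this kind of raw trigger stream into counts, locations, and activities.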
(This article belongs to the Special Issue Feature Papers in Intelligent Sensors 2024)

Review


41 pages, 17606 KiB  
Review
Reinforcement Learning Algorithms and Applications in Healthcare and Robotics: A Comprehensive and Systematic Review
by Mokhaled N. A. Al-Hamadani, Mohammed A. Fadhel, Laith Alzubaidi and Harangi Balazs
Sensors 2024, 24(8), 2461; https://doi.org/10.3390/s24082461 - 11 Apr 2024
Viewed by 401
Abstract
Reinforcement learning (RL) has emerged as a dynamic and transformative paradigm in artificial intelligence, offering the promise of intelligent decision-making in complex and dynamic environments. This unique feature enables RL to address sequential decision-making problems with simultaneous sampling, evaluation, and feedback. As a result, RL techniques have become suitable candidates for developing powerful solutions in various domains. In this study, we present a comprehensive and systematic review of RL algorithms and applications. This review commences with an exploration of the foundations of RL and proceeds to examine each algorithm in detail, concluding with a comparative analysis of RL algorithms based on several criteria. This review then extends to two key applications of RL: robotics and healthcare. In robotics manipulation, RL enhances precision and adaptability in tasks such as object grasping and autonomous learning. In healthcare, this review turns its focus to the realm of cell growth problems, clarifying how RL has provided a data-driven approach for optimizing the growth of cell cultures and the development of therapeutic solutions. This review offers a comprehensive overview, shedding light on the evolving landscape of RL and its potential in two diverse yet interconnected fields. Full article
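The tabular Q-learning update at the heart of many of the reviewed algorithms can be shown on a toy problem. The chain MDP below is invented for illustration; real robotics and healthcare applications replace the table with function approximators.

```python
import numpy as np

def q_learning_chain(n_states=5, episodes=500, alpha=0.1, gamma=0.9, seed=0):
    """Tabular Q-learning on a 1-D chain MDP: actions move left/right,
    reward 1 for reaching the rightmost state. A uniform-random behavior
    policy is used for exploration (Q-learning is off-policy)."""
    rng = np.random.default_rng(seed)
    q = np.zeros((n_states, 2))            # actions: 0 = left, 1 = right
    for _ in range(episodes):
        s = 0
        for _ in range(50):
            a = int(rng.integers(2))
            s2 = max(s - 1, 0) if a == 0 else min(s + 1, n_states - 1)
            r = 1.0 if s2 == n_states - 1 else 0.0
            # Temporal-difference update toward r + gamma * max_a' Q(s', a')
            q[s, a] += alpha * (r + gamma * q[s2].max() - q[s, a])
            s = s2
            if r:                          # episode ends at the goal
                break
    return q

q = q_learning_chain()
# The learned greedy policy heads right from every non-goal state.
```

The sampling, evaluation, and feedback loop the abstract describes is exactly this update applied repeatedly to experienced transitions.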
(This article belongs to the Special Issue Feature Papers in Intelligent Sensors 2024)

18 pages, 713 KiB  
Review
Recent Advances and Future Perspectives in the E-Nose Technologies Addressed to the Wine Industry
by Gianmarco Alfieri, Margherita Modesti, Riccardo Riggi and Andrea Bellincontro
Sensors 2024, 24(7), 2293; https://doi.org/10.3390/s24072293 - 4 Apr 2024
Viewed by 543
Abstract
Electronic nose devices stand out as pioneering innovations in contemporary technological research, addressing the arduous challenge of replicating the complex sense of smell found in humans. Currently, sensor instruments find application in a variety of fields, including environmental, (bio)medical, food, pharmaceutical, and materials production. The food sector, in particular, has seen a significant increase in the adoption of technological tools to assess food quality, gradually supplanting human panelists and thus reshaping the entire quality control paradigm in the sector. This process is happening even more rapidly in the world of wine, where olfactory sensory analysis has always played a central role in attributing certain qualities to a wine. In this review, conducted using sources such as PubMed, Science Direct, and Web of Science, we examined papers published between January 2015 and January 2024. The aim was to explore prevailing trends in the use of human panels and sensory tools (such as the E-nose) in the wine industry. The focus was on the evaluation of wine quality attributes, paying specific attention to geographical origin, sensory defects, and the monitoring of production trends. The analyzed results show that E-nose-type sensors perform satisfactorily along that trajectory. Nevertheless, the integration of this type of analysis with more classical methods, such as the trained sensory panel test and the application of destructive instrumental volatile compound (VOC) detection (e.g., gas chromatography), still seems necessary to better explore and investigate the aromatic characteristics of wines. Full article
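A minimal sketch of the kind of pattern recognition an E-nose pipeline performs: classifying samples from the response vector of a sensor array. The nearest-centroid classifier, the four-sensor layout, and all data below are invented for illustration.

```python
import numpy as np

def fit_centroids(x, y):
    """Per-class mean response vector of a (hypothetical) sensor array."""
    classes = sorted(set(y))
    return classes, np.array([x[np.array(y) == c].mean(0) for c in classes])

def predict(x, classes, centroids):
    """Assign each response vector to the nearest class centroid."""
    d = ((x[:, None, :] - centroids[None]) ** 2).sum(-1)
    return [classes[i] for i in d.argmin(1)]

# Toy data: 4-sensor responses for two invented wine classes.
train_x = np.array([[0.9, 0.1, 0.4, 0.2], [1.0, 0.2, 0.5, 0.1],
                    [0.2, 0.8, 0.1, 0.7], [0.1, 0.9, 0.2, 0.8]])
train_y = ["variety_a", "variety_a", "variety_b", "variety_b"]

classes, centroids = fit_centroids(train_x, train_y)
print(predict(np.array([[0.95, 0.15, 0.45, 0.15]]), classes, centroids))
# → ['variety_a']
```

Real deployments pair such classifiers with the panel tests and chromatographic methods the review discusses, rather than replacing them outright.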
(This article belongs to the Special Issue Feature Papers in Intelligent Sensors 2024)
