
Feature Papers in the Sensing and Imaging Section 2021

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensing and Imaging".

Deadline for manuscript submissions: closed (31 December 2021) | Viewed by 23154

Special Issue Editors


Prof. Dr. Sylvain Girard
Guest Editor
Laboratoire Hubert Curien, CNRS UMR 5516, Université de Lyon, 42000 Saint-Étienne, France
Interests: fiber sensors; optical sensors; image sensors; optical materials; radiation effects

Dr. Cosimo Distante
Guest Editor
Institute of Applied Sciences and Intelligent Systems "ScienceApp", Consiglio Nazionale delle Ricerche, c/o Dhitech Campus Universitario Ecotekne, Via Monteroni s/n, 73100 Lecce, Italy
Interests: computer vision; pattern recognition; video surveillance; object tracking; deep learning; audience measurements; visual interaction; human–robot interaction

Special Issue Information

Dear Colleagues,

We are pleased to announce that the Sensors Sensing and Imaging Section is now compiling a collection of papers submitted by the Editorial Board Members (EBMs) of our section and outstanding scholars in this research field. We welcome contributions as well as recommendations from the EBMs.

We invite original papers and review articles presenting state-of-the-art theoretical and applied advances, new experimental discoveries, and novel technological improvements in sensing and imaging. We expect these papers to be widely read and highly influential within the field. All papers in this Special Issue will be well promoted.

We would also like to take this opportunity to call on more excellent scholars to join the Sensing and Imaging Section so that we can work together to further develop this exciting field of research.

Prof. Dr. Sylvain Girard
Dr. Cosimo Distante
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (4 papers)


Research


22 pages, 15610 KiB  
Article
Evaluating the Accuracy of the Azure Kinect and Kinect v2
by Gregorij Kurillo, Evan Hemingway, Mu-Lin Cheng and Louis Cheng
Sensors 2022, 22(7), 2469; https://doi.org/10.3390/s22072469 - 23 Mar 2022
Cited by 29 | Viewed by 6817
Abstract
The Azure Kinect represents the latest generation of Microsoft Kinect depth cameras. Of interest in this article is the depth and spatial accuracy of the Azure Kinect and how it compares to its predecessor, the Kinect v2. In one experiment, the two sensors are used to capture a planar whiteboard at 15 locations in a grid pattern, with laser scanner data serving as ground truth. A set of histograms reveals the temporal random depth error inherent in each Kinect. Additionally, a two-dimensional cone of accuracy illustrates the systematic spatial error. At distances greater than 2.5 m, we find the Azure Kinect to have improved accuracy in both the spatial and temporal domains as compared to the Kinect v2, while for distances less than 2.5 m, the spatial and temporal accuracies were found to be comparable. In another experiment, we compare the distribution of random depth error between the two Kinect sensors by capturing a flat wall across the field of view in the horizontal and vertical directions. We find the Azure Kinect to have improved temporal accuracy over the Kinect v2 in the range of 2.5 to 3.5 m for measurements close to the optical axis. The results indicate that the Azure Kinect is a suitable substitute for the Kinect v2 in 3D scanning applications. Full article
(This article belongs to the Special Issue Feature Papers in the Sensing and Imaging Section 2021)
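The temporal error analysis described in the abstract can be reproduced with very little code: capture many frames of a static planar target and take the per-pixel standard deviation over time. Below is a minimal sketch of that idea (our assumption, not the authors' code), using NumPy with simulated frames standing in for real Kinect captures.

import numpy as np

def temporal_depth_noise(depth_stack):
    """depth_stack: (n_frames, H, W) depth maps in mm of a static scene.
    Returns the per-pixel standard deviation over time (temporal random error)."""
    valid = (depth_stack > 0).all(axis=0)   # ignore pixels with depth dropouts
    noise = depth_stack.std(axis=0)         # per-pixel temporal std deviation
    return np.where(valid, noise, np.nan)

# Toy usage: 100 simulated frames of a flat wall at 2500 mm with 2 mm sensor noise
rng = np.random.default_rng(42)
frames = 2500.0 + 2.0 * rng.standard_normal((100, 120, 160))
noise_map = temporal_depth_noise(frames)
print(f"median temporal noise: {np.nanmedian(noise_map):.2f} mm")

Histogramming noise_map over repeated captures at different target distances would yield the kind of temporal-error histograms the abstract describes.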

20 pages, 7381 KiB  
Article
Recognition of COVID-19 from CT Scans Using Two-Stage Deep-Learning-Based Approach: CNR-IEMN
by Fares Bougourzi, Riccardo Contino, Cosimo Distante and Abdelmalik Taleb-Ahmed
Sensors 2021, 21(17), 5878; https://doi.org/10.3390/s21175878 - 31 Aug 2021
Cited by 14 | Viewed by 3142
Abstract
Since the appearance of the COVID-19 pandemic (at the end of 2019, Wuhan, China), the recognition of COVID-19 with medical imaging has become an active research topic for the machine learning and computer vision community. This paper is based on the results obtained from the 2021 COVID-19 SPGC challenge, which aims to classify volumetric CT scans into normal, COVID-19, or community-acquired pneumonia (CAP) classes. To this end, we proposed a deep-learning-based approach (CNR-IEMN) that consists of two main stages. In the first stage, we trained four deep learning architectures with a multi-task strategy for slice-level classification. In the second stage, we used the previously trained models with an XGBoost classifier to classify the whole CT scan into normal, COVID-19, or CAP classes. Our approach achieved good results on the validation set, with an overall accuracy of 87.75% and sensitivities of 96.36%, 52.63%, and 95.83% for COVID-19, CAP, and normal, respectively. Across the three SPGC test datasets, our approach placed fifth in the COVID-19 challenge while achieving the best result for COVID-19 sensitivity; in addition, it placed second on two of the three test sets. Full article
(This article belongs to the Special Issue Feature Papers in the Sensing and Imaging Section 2021)
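The second stage described in the abstract (slice-level model outputs fed into an XGBoost classifier for a scan-level decision) can be illustrated with a short sketch. The pooling of slice probabilities into a fixed-length descriptor below is our illustrative assumption, not the authors' exact feature construction.

import numpy as np
from xgboost import XGBClassifier

def scan_features(slice_probs):
    """slice_probs: (n_slices, 3) slice-level softmax outputs
    (normal, COVID-19, CAP); pooled into one vector per CT scan."""
    return np.concatenate([slice_probs.mean(axis=0),
                           slice_probs.max(axis=0),
                           slice_probs.std(axis=0)])

# Toy training data: 21 scans with random slice counts and labels 0/1/2
rng = np.random.default_rng(0)
X = np.stack([scan_features(rng.dirichlet(np.ones(3), size=int(rng.integers(40, 120))))
              for _ in range(21)])
y = np.arange(21) % 3                      # ensures every class is present
clf = XGBClassifier(n_estimators=50, max_depth=3)
clf.fit(X, y)
print(clf.predict(X[:3]))

In a real pipeline, slice_probs would come from the four trained CNNs rather than random draws, with one pooled descriptor per model concatenated before classification.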

25 pages, 93653 KiB  
Article
Optimization of X-ray Investigations in Dentistry Using Optical Coherence Tomography
by Ralph-Alexandru Erdelyi, Virgil-Florin Duma, Cosmin Sinescu, George Mihai Dobre, Adrian Bradu and Adrian Podoleanu
Sensors 2021, 21(13), 4554; https://doi.org/10.3390/s21134554 - 2 Jul 2021
Cited by 22 | Viewed by 6029
Abstract
The most common imaging technique for dental diagnoses and treatment monitoring is X-ray imaging, which evolved from the first intraoral radiographs to high-quality three-dimensional (3D) Cone Beam Computed Tomography (CBCT). Other imaging techniques have shown potential, such as Optical Coherence Tomography (OCT). We have recently reported on the boundaries of these two types of techniques, regarding the dental fields where each one is more appropriate or where they should both be used. The aim of the present study is to explore the unique capabilities of the OCT technique to optimize the imaging of X-ray units (i.e., in terms of image resolution, radiation dose, or contrast). Two types of commercially available and widely used X-ray units are considered. To adjust their parameters, a protocol is developed that employs OCT images of dental conditions documented on high-resolution (i.e., better than 10 μm) OCT images (both B-scans/cross-sections and 3D reconstructions) but hardly identifiable on the 200 to 75 μm resolution panoramic or CBCT radiographs. The optimized calibration of the X-ray unit includes choosing appropriate values for the anode voltage and current intensity of the X-ray tube, as well as the patient's positioning, in order to reach the highest possible X-ray resolution at a radiation dose that is safe for the patient. The optimization protocol is developed in vitro on OCT images of extracted teeth and is further applied in vivo for each type of dental investigation. Optimized radiographic results are compared with previously performed, un-optimized radiographs. We also show that OCT permits a rigorous comparison between two (types of) X-ray units. In conclusion, high-quality dental images are possible using low radiation doses if an optimized protocol, developed using OCT, is applied for each type of dental investigation. There are also situations in which X-ray technology has drawbacks for dental diagnosis or treatment assessment; in such situations, OCT proves capable of providing qualitative images. Full article
(This article belongs to the Special Issue Feature Papers in the Sensing and Imaging Section 2021)
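The calibration described in the abstract (choosing tube voltage and current for the best image quality at a safe dose, with OCT images as the reference) can be sketched as a constrained grid search. Everything below is a hypothetical illustration: the dose model and the OCT-referenced quality score are placeholders, not the published protocol.

from itertools import product

def optimize_exposure(voltages_kv, currents_ma, dose_limit,
                      estimate_dose, quality_vs_oct):
    """Return the (kV, mA) setting with the best OCT-referenced quality
    score among settings whose estimated dose stays within the limit."""
    best, best_score = None, float("-inf")
    for kv, ma in product(voltages_kv, currents_ma):
        if estimate_dose(kv, ma) > dose_limit:
            continue                       # over the dose budget: skip
        score = quality_vs_oct(kv, ma)     # e.g., resolution/contrast vs OCT reference
        if score > best_score:
            best, best_score = (kv, ma), score
    return best, best_score

# Toy usage with placeholder dose and quality models
best, score = optimize_exposure(
    voltages_kv=[60, 70, 80, 90], currents_ma=[4, 6, 8, 10],
    dose_limit=1.0,
    estimate_dose=lambda kv, ma: kv * ma / 700.0,
    quality_vs_oct=lambda kv, ma: 0.01 * kv + 0.05 * ma)
print(best, round(score, 2))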

Review


34 pages, 978 KiB  
Review
Macro- and Micro-Expressions Facial Datasets: A Survey
by Hajer Guerdelli, Claudio Ferrari, Walid Barhoumi, Haythem Ghazouani and Stefano Berretti
Sensors 2022, 22(4), 1524; https://doi.org/10.3390/s22041524 - 16 Feb 2022
Cited by 15 | Viewed by 6140
Abstract
Automatic facial expression recognition is essential for many potential applications. Thus, having a clear overview of the existing datasets that have been investigated within the framework of facial expression recognition is of paramount importance in designing and evaluating effective solutions, notably for neural-network-based training. In this survey, we review more than eighty facial expression datasets, covering both macro- and micro-expressions. The study focuses mostly on spontaneous and in-the-wild datasets, given the common trend in the research of considering contexts where expressions are shown spontaneously and in a real setting. We also provide instances of potential applications of the investigated datasets, while highlighting their pros and cons. The proposed survey can help researchers better understand the characteristics of the existing datasets, thus facilitating the choice of the data that best suits the particular context of their application. Full article
(This article belongs to the Special Issue Feature Papers in the Sensing and Imaging Section 2021)
