
Image Sensors: Systems and Applications

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Physical Sensors".

Deadline for manuscript submissions: closed (30 September 2020) | Viewed by 47210

Special Issue Editors


Guest Editor
1. ESPOL Polytechnic University, Guayaquil, Ecuador
2. Computer Vision Center, Barcelona, Spain
Interests: multispectral image processing; deep learning for image processing and computer vision; 2D/3D images and model registration; cross-spectral image segmentation

Guest Editor
Rey Juan Carlos University, Madrid, Spain
Interests: document image analysis; handwriting recognition; computer biometrics; deep learning for image processing and computer vision; cognitive architectures

Guest Editor
Universidad del Bio-Bio, Collao 1202, Concepcion, Chile
Interests: multispectral image processing; machine vision; deep learning for image processing and computer vision

Guest Editor
Universidad Tecnologica de Chile INACAP, Av. Vitacura 10.151, Vitacura, Santiago, Chile
Interests: embedded computer vision; multispectral image processing; underwater robotics and stem education

Special Issue Information

Dear Colleagues,

In recent years, systems and applications based on image sensors have become ubiquitous: in most of our everyday activities, we can find something related to or based on image sensor information. Although we are rarely aware of it, image sensors are present everywhere, and we rely on systems and applications built on image processing, from remote sensing, where satellite images are processed for agricultural and urban monitoring or weather forecasting, to urban video cameras used for traffic management and surveillance in smart city applications. This considerable growth in computer vision applications has been boosted in recent years by the spread of mobile devices. Almost everybody carries a mobile phone, in most cases equipped with several image sensors (some phones have up to five cameras), and, almost intuitively, we photograph a given text instead of writing it down on a piece of paper. Indeed, camera capability is one of the critical features for success in the mobile phone market. The growth in visual sensors is also pushed by applications in the transportation domain, where many cars now leave the factory with visual sensors already mounted, and by urban video cameras used for traffic management and video surveillance. Finally, biometric applications, for instance face recognition, which is driving next-generation authentication, are also contributing to the expansion of image sensors.

This Special Issue is intended to review the state of the art in systems and applications of image sensors. Contributions on (but not limited to) the following topics will be considered:

  • Acquisition devices;
  • Processing (filtering, coloring, feature extraction, and matching);
  • Representation (2D, 3D, 4D);
  • Understanding (machine learning, pattern recognition, biometric applications);
  • Deep/transfer learning, domain adaptation;
  • Robotics and autonomous vehicles;
  • Sensing for agriculture and food safety;
  • Remote sensing;
  • Applications of image sensors to Smart Cities.

Dr. Angel D. Sappa
Dr. Angel Sanchez
Dr. Cristhian Aguilera
Dr. Cristhian A. Aguilera
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (9 papers)


Research

31 pages, 3727 KiB  
Article
Multi-Sensor Extrinsic Calibration Using an Extended Set of Pairwise Geometric Transformations
by Vitor Santos, Daniela Rato, Paulo Dias and Miguel Oliveira
Sensors 2020, 20(23), 6717; https://doi.org/10.3390/s20236717 - 24 Nov 2020
Cited by 4 | Viewed by 2255
Abstract
Systems composed of multiple sensors for exteroceptive perception, such as mobile robots or highly monitored spaces, are becoming increasingly common. However, to combine and fuse those sensors into a larger and more robust representation of the perceived scene, the sensors need to be properly registered with one another; that is, all relative geometric transformations must be known. This calibration procedure is challenging because, traditionally, it requires human intervention to varying extents. This paper proposes a nearly automatic method in which the best set of geometric transformations among any number of sensors is obtained by processing and combining the individual pairwise transformations obtained from an experimental procedure. Besides eliminating experimental outliers with a standard criterion, the method exploits the possibility of obtaining better geometric transformations between all pairs of sensors by combining them, within some restrictions, into a more precise transformation, and thus a better calibration. Although other data sources are possible, in this approach each sensor acquires a 3D point cloud corresponding to the successive centers of a moving ball in its field of view. The method can be applied to any sensor able to detect the ball and the 3D position of its center, namely LIDARs, mono cameras (visual or infrared), stereo cameras, and TOF cameras. Results demonstrate that calibration is improved compared to methods in previous works that do not address the outlier problem and that, depending on the context (as explained in the results section), the multi-pairwise technique can be used in two different methodologies to reduce uncertainty in the calibration process.
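The core idea of combining pairwise transformations can be illustrated with homogeneous 4x4 matrices: an indirect estimate of the transform between two sensors is obtained by chaining transforms through an intermediate sensor. The sketch below shows only that composition step, not the paper's estimation method; all matrix values are made up.

```python
import numpy as np

def make_T(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical pairwise calibrations: sensor A -> B and sensor B -> C.
T_ab = make_T(np.eye(3), [1.0, 0.0, 0.0])
T_bc = make_T(np.eye(3), [0.0, 2.0, 0.0])

# An indirect A -> C estimate is obtained by composing the two pairwise
# transforms; the paper's approach combines several such chained estimates
# (plus direct measurements) to reduce uncertainty.
T_ac_indirect = T_bc @ T_ab
```

With identity rotations, the indirect translation is simply the sum of the two pairwise translations, which makes the composition easy to verify by hand.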
(This article belongs to the Special Issue Image Sensors: Systems and Applications)

17 pages, 10282 KiB  
Article
Pre-Trained Deep Convolutional Neural Network for Clostridioides Difficile Bacteria Cytotoxicity Classification Based on Fluorescence Images
by Andrzej Brodzicki, Joanna Jaworek-Korjakowska, Pawel Kleczek, Megan Garland and Matthew Bogyo
Sensors 2020, 20(23), 6713; https://doi.org/10.3390/s20236713 - 24 Nov 2020
Cited by 17 | Viewed by 2544
Abstract
Clostridioides difficile infection (CDI) is an enteric bacterial disease that is increasing in incidence worldwide. Symptoms of CDI range from mild diarrhea to severe, life-threatening inflammation of the colon. While antibiotics are the standard-of-care treatment for CDI, they are also the biggest risk factor for the development of CDI and its recurrence. Therefore, novel therapies that successfully treat CDI and protect against recurrence are an unmet clinical need. Screening for novel drug leads often relies on manual image analysis, a process that is slow, tedious, and subject to human error and bias. So far, little work has focused on computer-aided screening for drug leads based on fluorescence images. Here, we propose a novel method to identify characteristic morphological changes in human fibroblast cells exposed to C. difficile toxins, based on computer vision algorithms supported by deep learning methods. Classical image processing algorithms for the pre-processing stage are used together with an adjusted pre-trained deep convolutional neural network responsible for cell classification. In this study, we take advantage of transfer learning methodology by examining pre-trained VGG-19, ResNet50, Xception, and DenseNet121 convolutional neural network (CNN) models with adjusted, densely connected classifiers. We compare the obtained results with those of other machine learning algorithms, and also visualize and interpret them. The proposed models have been evaluated on a dataset containing 369 images with 6112 cases. DenseNet121 achieved the highest results, with 93.5% accuracy, 92% sensitivity, and 95% specificity.
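The transfer-learning recipe described here, a frozen pre-trained feature extractor followed by a small trainable densely connected classifier, can be sketched in miniature. In the toy version below, a fixed random projection stands in for the frozen DenseNet121 backbone and a hand-rolled logistic-regression head stands in for the dense classifier; every name, shape, and number is illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for a frozen pre-trained backbone: a fixed projection mapping
# flattened images to a feature vector (a real backbone would produce
# convolutional features instead, but it is likewise not updated).
W_frozen = rng.normal(size=(64, 16))

def extract_features(images):
    """Frozen 'backbone': fixed linear projection followed by ReLU."""
    return np.maximum(images.reshape(len(images), -1) @ W_frozen, 0.0)

def train_head(X, y, lr=0.1, steps=200):
    """Trainable classifier head: logistic regression by gradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= lr * X.T @ (p - y) / len(y)
    return w

# Toy 'dataset': 8x8 images labeled by the sign of their mean intensity.
images = rng.normal(size=(40, 8, 8))
labels = (images.mean(axis=(1, 2)) > 0).astype(float)

X = extract_features(images)          # backbone stays fixed
w = train_head(X, labels)             # only the head is fitted
preds = (1.0 / (1.0 + np.exp(-(X @ w))) > 0.5).astype(float)
acc = (preds == labels).mean()
```

Only the head's weights are fitted; the feature extractor never changes, which is the essence of the transfer-learning setup the abstract describes.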
(This article belongs to the Special Issue Image Sensors: Systems and Applications)

20 pages, 3054 KiB  
Article
Detecting, Tracking and Counting People Getting On/Off a Metropolitan Train Using a Standard Video Camera
by Sergio A. Velastin, Rodrigo Fernández, Jorge E. Espinosa and Alessandro Bay
Sensors 2020, 20(21), 6251; https://doi.org/10.3390/s20216251 - 2 Nov 2020
Cited by 28 | Viewed by 6787
Abstract
The main source of delays in public transport systems (buses, trams, metros, railways) occurs in their stations. For example, a public transport vehicle can travel at 60 km per hour between stations, but its commercial speed (average en-route speed, including any intermediate delay) does not reach more than half of that value. Therefore, the problem that public transport operators must solve is how to reduce the delay in stations. From the perspective of transport engineering, there are several ways to approach this issue, from the design of infrastructure and vehicles to passenger traffic management. The tools normally available to traffic engineers are analytical models, microscopic traffic simulation, and, ultimately, real-scale laboratory experiments. In any case, the required data are the numbers of passengers that get on and off the vehicles, as well as the number of passengers waiting on platforms. Traditionally, such data have been collected manually through field counts or from videos that are then processed by hand. On the other hand, public transport networks, especially metropolitan railways, have an extensive monitoring infrastructure based on standard video cameras. Traditionally, these are observed manually or with very basic signal processing support, so there is significant scope for improving data capture and for automating the analysis of site usage, safety, and surveillance. This article shows a way of collecting and analyzing the data needed to feed both traffic models and laboratory experimentation, exploiting recent intelligent sensing approaches. The paper presents a new public video dataset gathered using real-scale laboratory recordings. Part of this dataset has been annotated by hand, marking up head locations to provide a ground truth on which to train and evaluate deep learning detection and tracking algorithms. Tracking outputs are then used to count people getting on and off, achieving a mean accuracy of 92% with less than 0.15% standard deviation on 322 mostly unseen dataset video sequences.
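Turning tracking output into boarding/alighting counts typically reduces to detecting when a tracked head trajectory crosses a virtual door line. The rule below is a minimal sketch of that step; the line position, coordinate convention, and direction-to-event mapping are illustrative assumptions, not the paper's exact procedure.

```python
def count_crossings(tracks, line_y=0.0):
    """Count boarding/alighting events from tracked head trajectories.

    tracks: dict mapping track id -> list of (x, y) head positions.
    A trajectory moving from below line_y to above it counts as 'on';
    the opposite direction counts as 'off'.
    """
    on = off = 0
    for points in tracks.values():
        start_y, end_y = points[0][1], points[-1][1]
        if start_y < line_y <= end_y:
            on += 1
        elif start_y >= line_y > end_y:
            off += 1
    return on, off

tracks = {
    1: [(0.2, -1.0), (0.2, 0.5)],   # crosses upward: boarding
    2: [(0.6, 1.0), (0.6, -0.4)],   # crosses downward: alighting
    3: [(0.9, -1.0), (0.9, -0.2)],  # never crosses the door line
}
on, off = count_crossings(tracks)
```

Real systems must also handle track fragmentation and re-identification, which is where most of the reported error comes from.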
(This article belongs to the Special Issue Image Sensors: Systems and Applications)

23 pages, 61534 KiB  
Article
An Architectural Multi-Agent System for a Pavement Monitoring System with Pothole Recognition in UAV Images
by Luís Augusto Silva, Héctor Sanchez San Blas, David Peral García, André Sales Mendes and Gabriel Villarubia González
Sensors 2020, 20(21), 6205; https://doi.org/10.3390/s20216205 - 30 Oct 2020
Cited by 46 | Viewed by 6473
Abstract
In recent years, maintenance work on public transport routes has drastically decreased in many countries due to difficult economic situations. Various studies conducted by groups of drivers and road safety organizations have concluded that accidents are increasing due to the poor condition of road surfaces, which even affects the condition of vehicles through costly breakdowns. Currently, the process of detecting any type of damage to a road is carried out manually or relies on a dedicated road vehicle, which incurs a high labor cost. To solve this problem, many research centers are investigating image processing techniques to identify road areas in poor condition using deep learning algorithms. The main objective of this work is to design a distributed platform that detects damage to transport routes using drones and provides the results of the most important classifiers. A case study is presented using a multi-agent system based on PANGEA that coordinates the different parts of the architecture using techniques based on ubiquitous computing. The results obtained by customizing the You Only Look Once (YOLO) v4 classifier are promising, reaching an accuracy of more than 95%. The images used have been published in a dataset for use by the scientific community.
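Detector accuracies such as the one reported here are conventionally computed by matching predicted boxes to ground-truth boxes via intersection over union (IoU). A self-contained IoU helper, with corner-format (x1, y1, x2, y2) boxes assumed for illustration:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned (x1, y1, x2, y2) boxes."""
    # Intersection rectangle (empty if the boxes do not overlap).
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Two unit-offset 2x2 boxes share a 1x1 patch: IoU = 1 / (4 + 4 - 1) = 1/7.
score = iou((0, 0, 2, 2), (1, 1, 3, 3))
```

A prediction is usually counted as correct when its IoU with a ground-truth pothole box exceeds a threshold such as 0.5; the threshold used by this study is not stated in the abstract.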
(This article belongs to the Special Issue Image Sensors: Systems and Applications)

17 pages, 4849 KiB  
Article
Thermographic Inspection of Internal Defects in Steel Structures: Analysis of Signal Processing Techniques in Pulsed Thermography
by Yoonjae Chung, Ranjit Shrestha, Seungju Lee and Wontae Kim
Sensors 2020, 20(21), 6015; https://doi.org/10.3390/s20216015 - 23 Oct 2020
Cited by 27 | Viewed by 4738
Abstract
This study performed an experimental investigation of pulsed thermography for detecting internal defects, the major degradation phenomenon in several structures of the secondary systems of nuclear power plants as well as in industrial pipelines. Material losses due to wall thinning were simulated by drilling flat-bottomed holes (FBHs) in a steel plate. FBHs of different sizes at varying depths were considered to evaluate the detection capability of the proposed technique. A short, high-energy light pulse was deposited on the sample surface, and an infrared camera was used to analyze the effect of the applied heat flux. The three most established signal processing techniques of thermography, namely thermal signal reconstruction (TSR), pulsed phase thermography (PPT), and principal component thermography (PCT), were applied to the raw thermal images. The performance of each technique was then evaluated in terms of defect detectability and signal-to-noise ratio (SNR). The results revealed that TSR enhanced defect detectability, detecting the maximum number of defects; PPT provided the highest SNR, especially for the deeper defects; and PCT provided the highest SNR for the shallower defects.
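Of the three techniques, principal component thermography is the easiest to sketch: the thermal sequence is flattened into a time-by-pixels matrix, standardized, and decomposed by SVD, and the leading empirical orthogonal functions are reshaped into component images that concentrate defect contrast. The following is a minimal version of that standard PCT recipe, not the authors' exact pipeline.

```python
import numpy as np

def pct(frames, n_components=2):
    """Principal component thermography on a thermal image sequence.

    frames: array of shape (T, H, W), i.e. T thermal images over time.
    Each pixel's time profile is standardized, then SVD extracts the
    dominant spatial patterns (empirical orthogonal functions).
    """
    T, H, W = frames.shape
    X = frames.reshape(T, H * W).astype(float)
    # Standardize each pixel's temporal profile (epsilon avoids /0).
    X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    # Rows of Vt are spatial patterns; reshape the leading ones to images.
    return Vt[:n_components].reshape(n_components, H, W)

rng = np.random.default_rng(0)
comps = pct(rng.normal(size=(10, 8, 8)))  # synthetic stand-in sequence
```

On real pulsed-thermography data, the second component typically highlights subsurface defects that are invisible in any single raw frame.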
(This article belongs to the Special Issue Image Sensors: Systems and Applications)

23 pages, 5900 KiB  
Article
SSD vs. YOLO for Detection of Outdoor Urban Advertising Panels under Multiple Variabilities
by Ángel Morera, Ángel Sánchez, A. Belén Moreno, Ángel D. Sappa and José F. Vélez
Sensors 2020, 20(16), 4587; https://doi.org/10.3390/s20164587 - 15 Aug 2020
Cited by 53 | Viewed by 10407
Abstract
This work compares the Single Shot MultiBox Detector (SSD) and You Only Look Once (YOLO) deep neural networks on the outdoor advertisement panel detection problem, handling multiple and combined variabilities in the scenes. Publicity panel detection in images offers important advantages in both real and virtual environments. For example, applications such as Google Street View can be used for Internet publicity: once ad panels are detected in images, the publicity appearing inside them could be replaced by that of a funding company. In our experiments, both the SSD and YOLO detectors produced acceptable results under variable panel sizes, illumination conditions, viewing perspectives, partial occlusion of panels, complex backgrounds, and multiple panels in scenes. Due to the difficulty of finding annotated images for the considered problem, we created our own dataset for conducting the experiments. The major strength of the SSD model was its almost complete elimination of False Positive (FP) cases, a situation that is preferable when the publicity contained inside the panels is analyzed after detection. On the other hand, YOLO produced better panel localization results, detecting a higher number of True Positive (TP) panels with higher accuracy. Finally, a comparison of the two analyzed object detection models with different types of semantic segmentation networks, using the same evaluation metrics, is also included.
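The FP/TP trade-off described above is exactly what precision and recall capture: a detector that suppresses false positives scores high precision, while one that finds more true panels scores high recall. A small helper makes the comparison concrete; the counts below are hypothetical, chosen only to mirror the qualitative finding.

```python
def detection_metrics(tp, fp, fn):
    """Precision and recall from true-positive, false-positive,
    and false-negative detection counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Hypothetical outcome in the spirit of the abstract: SSD trades a few
# missed panels for almost no false alarms; YOLO finds more panels at
# the cost of extra false positives.
ssd = detection_metrics(tp=80, fp=2, fn=20)
yolo = detection_metrics(tp=92, fp=10, fn=8)
```

Which operating point is preferable depends on the downstream task; the abstract argues for high precision when panel content is analyzed after detection.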
(This article belongs to the Special Issue Image Sensors: Systems and Applications)

22 pages, 12662 KiB  
Article
Automatic 360° Mono-Stereo Panorama Generation Using a Cost-Effective Multi-Camera System
by Hayat Ullah, Osama Zia, Jun Ho Kim, Kyungjin Han and Jong Weon Lee
Sensors 2020, 20(11), 3097; https://doi.org/10.3390/s20113097 - 30 May 2020
Cited by 17 | Viewed by 5126
Abstract
In recent years, 360° videos have gained the attention of researchers due to their versatility and applications to real-world problems. Easy access to different visual sensor kits and easily deployable image acquisition devices has also played a vital role in the growth of interest in this area by the research community. Recently, several 360° panorama generation systems have demonstrated generated panoramas of reasonable quality. However, these systems are equipped with expensive image sensor networks in which multiple cameras are mounted on a circular rig with specific overlapping gaps. In this paper, we propose an economical 360° panorama generation system that generates both mono and stereo panoramas. For mono panorama generation, we present a drone-mounted image acquisition sensor kit that consists of six cameras placed in a circular fashion with an optimal overlapping gap. The hardware of our proposed image acquisition system is configured in such a way that no user input is required to stitch multiple images. For stereo panorama generation, we propose a lightweight, cost-effective visual sensor kit that uses only three cameras to cover 360° of the surroundings. We also developed stitching software that generates both mono and stereo panoramas using a single image stitching pipeline, in which the panorama generated by our proposed system is automatically straightened without visible seams. Furthermore, we compared our proposed system with existing mono and stereo content generation systems from both qualitative and quantitative perspectives, and the comparative measurements obtained verified the effectiveness of our system relative to existing mono and stereo generation systems.
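The geometric constraint behind both rigs is simple: N cameras spaced evenly on a circle cover 360° only if each camera's horizontal field of view exceeds 360/N degrees, and the excess is the seam overlap available for stitching. A quick check of that constraint (the lens angles below are illustrative, not the paper's actual hardware):

```python
def ring_overlap(n_cameras, fov_deg):
    """Horizontal overlap in degrees between adjacent cameras on a
    circular rig covering 360 degrees; negative means a coverage gap."""
    spacing = 360.0 / n_cameras
    return fov_deg - spacing

# Six cameras with a hypothetical 90-degree lens leave 30 degrees of
# overlap per seam; a three-camera rig needs lenses wider than 120
# degrees to avoid gaps.
six_cam = ring_overlap(6, 90.0)
three_cam = ring_overlap(3, 110.0)
```

This is why the three-camera stereo kit depends on wide-angle optics, while the six-camera mono kit can use narrower, cheaper lenses.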
(This article belongs to the Special Issue Image Sensors: Systems and Applications)

12 pages, 4778 KiB  
Article
Crosstalk Reduction Using a Dual Energy Window Scatter Correction in Compton Imaging
by Makoto Sakai, Raj Kumar Parajuli, Yoshiki Kubota, Nobuteru Kubo, Mitsutaka Yamaguchi, Yuto Nagao, Naoki Kawachi, Mikiko Kikuchi, Kazuo Arakawa and Mutsumi Tashiro
Sensors 2020, 20(9), 2453; https://doi.org/10.3390/s20092453 - 26 Apr 2020
Cited by 8 | Viewed by 4893
Abstract
Compton cameras can simultaneously detect multiple isotopes; however, when simultaneous imaging is performed, crosstalk artifacts appear in the images obtained using a low-energy window. In conventional single-photon emission computed tomography, a dual energy window (DEW) subtraction method is used to reduce crosstalk. This study aimed to evaluate the effectiveness of the DEW technique for reducing crosstalk artifacts in Compton images obtained using low-energy windows. To this end, we compared reconstructed images obtained using either a photo-peak window or a scatter window, performing image subtraction based on the differences between the two images. Simulation calculations were performed to obtain the list data for the Compton camera using 171 and 511 keV point sources. In the images reconstructed using these data, crosstalk artifacts were clearly observed in the images obtained using a 171 keV photo-peak energy window. In the images obtained using a scatter window (176–186 keV), only crosstalk artifacts were visible. The DEW method could eliminate the influence of high-energy sources on the images obtained with a photo-peak window, thereby improving quantitative capability. This was also observed when the DEW method was applied to experimentally obtained images.
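At its core, the DEW correction is a pixel-wise subtraction: the image reconstructed from the scatter window estimates the crosstalk contribution inside the photo-peak window and is subtracted from the photo-peak image. The sketch below illustrates only that subtraction; the scaling factor k and the clipping of negative values are illustrative assumptions, not details from the paper.

```python
import numpy as np

def dew_correct(photopeak_img, scatter_img, k=1.0):
    """Dual-energy-window crosstalk correction (sketch).

    Subtracts the scatter-window image, scaled by k, from the
    photo-peak-window image and clips negative pixels to zero.
    """
    return np.clip(photopeak_img - k * scatter_img, 0.0, None)

# Toy 2x2 'reconstructions': the scatter window captures crosstalk that
# contaminates the photo-peak window.
photopeak = np.array([[5.0, 3.0], [2.0, 0.0]])
scatter = np.array([[1.0, 4.0], [1.0, 0.0]])
corrected = dew_correct(photopeak, scatter)
```

In the actual study the subtraction operates on fully reconstructed Compton images, where the scatter window (176–186 keV) was shown to contain essentially only crosstalk.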
(This article belongs to the Special Issue Image Sensors: Systems and Applications)

17 pages, 7711 KiB  
Article
Magnetic Ring Multi-Defect Stereo Detection System Based on Multi-Camera Vision Technology
by Xinman Zhang, Weiyong Gong and Xuebin Xu
Sensors 2020, 20(2), 392; https://doi.org/10.3390/s20020392 - 10 Jan 2020
Cited by 9 | Viewed by 3067
Abstract
Magnetic rings are the most widely used magnetic material product in industry. The existing manual defect detection method for magnetic rings has high cost, low efficiency, and low precision. To address this issue, a magnetic ring multi-defect stereo detection system based on multi-camera vision technology was developed to automate the inspection of magnetic rings. The system can detect surface defects and measure ring height simultaneously. Two image processing algorithms are proposed: the image edge removal algorithm (IERA) and the magnetic ring location algorithm (MRLA). On the basis of these two algorithms, connected-domain filtering methods for crack, fiber, and large-area defects are established to complete defect inspection. The system achieves a recognition rate of 100% for defects such as cracks, adhesion, hanger adhesion, and pitting. Furthermore, the recognition rates for fiber and foreign matter defects reach 92.5% and 91.5%, respectively. The detection speed exceeds 120 magnetic rings per minute, and the precision is within 0.05 mm. Both precision and speed meet the requirements of real-time quality inspection in actual production.
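Connected-domain filtering of the kind used here for crack, fiber, and large-area defects can be sketched with a plain BFS labelling pass that keeps only regions above an area threshold. The 4-connectivity and the threshold value below are illustrative choices, not the paper's parameters.

```python
from collections import deque

def filter_components(mask, min_area):
    """Keep only connected defect regions with area >= min_area.

    mask: 2D list of 0/1 pixels (1 = defect candidate).
    Labels components with a 4-connected BFS and zeroes out any
    component smaller than min_area.
    """
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    seen = [[False] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            if mask[i][j] and not seen[i][j]:
                # Collect one connected component by BFS.
                comp, queue = [], deque([(i, j)])
                seen[i][j] = True
                while queue:
                    y, x = queue.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if len(comp) >= min_area:
                    for y, x in comp:
                        out[y][x] = 1
    return out

mask = [[1, 1, 0],
        [0, 0, 0],
        [0, 0, 1]]
filtered = filter_components(mask, min_area=2)  # drops the isolated pixel
```

In practice the area threshold (and additional shape features such as elongation for fibers) separates genuine defects from sensor noise.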
(This article belongs to the Special Issue Image Sensors: Systems and Applications)
